repo_name | pr_number | pr_title | pr_description | author | date_created | date_merged | filepath | before_content | after_content | pr_author | previous_commit | pr_commit | comment | comment_author | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
py-why/dowhy | 478 | Adding Non Linear Sensitivity Analysis | This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398
It implements two sensitivity analyzers:
1. For Partial Linear DGPs and estimators like LinearDML.
2. For general non-parametric DGPs and estimators like KernelDML (both model classes are sketched just below).
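For orientation, these are the two standard model classes from the Chernozhukov et al. framework, restated here as background rather than quoted from the PR: the partially linear case assumes the treatment enters additively, while the general case leaves the way treatment and covariates interact unrestricted.

$$\text{Partially linear: } Y = \theta\,T + g(W) + \varepsilon, \qquad \text{General non-parametric: } Y = g(T, W) + \varepsilon,$$

with the target in the general case being the average effect $E[g(1, W) - g(0, W)]$.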
The notebook in this PR provides an introduction to how the sensitivity bounds are calculated for the partial linear case. For general non-parametric DGPs, we need to estimate a special function called the Riesz representer. For binary treatment, it is exactly the difference in outcome weighted by propensity score. So we provide two options to learn the Riesz representer: 1) plugin_reisz, which uses the propensity score; and 2) a general estimator that uses a custom loss function. These two are in the file reisz.py.
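For reference, the "difference in outcome weighted by propensity score" mentioned above is the standard binary-treatment Riesz representer; the formula below is background knowledge rather than text from the PR:

$$\alpha(W, T) = \frac{T}{e(W)} - \frac{1 - T}{1 - e(W)}, \qquad e(W) = P(T = 1 \mid W),$$

which satisfies $E[\alpha(W, T)\, g(T, W)] = E[g(1, W) - g(0, W)]$ for any square-integrable $g$. The plugin_reisz option estimates $e(W)$ directly, while the general option learns $\alpha$ by minimizing a custom loss.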
Briefly, the sensitivity bounds depend on two parameters that denote the effect of the unobserved confounder on treatment and outcome. That's why we use the same API as for the `add_unobserved_common_cause` method and add this sensitivity analysis as a possible simulation method="non-parametric-partial-R2". The format of the plots is identical to those from the "linear-partial-r2" simulation method that is already implemented.
We provide two modes for the user (a usage sketch follows the list below).
1) User specifies the effect strength parameters themselves, as a range of values.
2) User benchmarks the effect strength parameters as a multiple of the same parameters for the observed common causes.
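The block below is a hedged usage sketch rather than code from this PR: the `add_unobserved_common_cause` method name and the `"non-parametric-partial-R2"` simulation method string come from the description above, while the dataset helpers, the econml estimator wiring, and keyword arguments such as `benchmark_common_causes` and `effect_fraction_on_treatment` are assumed to mirror the existing linear-partial-r2 API and may differ in the merged version.

```python
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

import dowhy.datasets
from dowhy import CausalModel

# Toy dataset; dowhy's generator names the observed common causes W0..W3.
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=4, num_samples=500, treatment_is_binary=True
)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

# A DML estimator, since the new analyzers target LinearDML / KernelDML.
estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.econml.dml.LinearDML",
    method_params={
        "init_params": {
            "model_y": GradientBoostingRegressor(),
            "model_t": GradientBoostingClassifier(),
            "discrete_treatment": True,
        },
        "fit_params": {},
    },
)

# Mode 2: benchmark the hypothetical confounder against an observed common cause.
refutation = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="add_unobserved_common_cause",
    simulation_method="non-parametric-partial-R2",
    benchmark_common_causes=["W0"],          # assumed covariate name from the toy data
    effect_fraction_on_treatment=[1, 2, 3],  # assumed kwarg: multiples of W0's strength
)
print(refutation)
```

Mode 1 would instead pass explicit ranges for the two partial R^2 parameters (for example `partial_r2_confounder_treatment=numpy.arange(0, 0.8, 0.1)` and the analogous outcome argument), again with argument names that are assumptions here.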
Signed-off-by: anusha <anushaagarwal2000.com> | null | 2022-06-20 14:37:11+00:00 | 2022-09-16 03:57:26+00:00 | dowhy/causal_refuters/linear_sensitivity_analyzer.py |
import logging
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import t
from dowhy.utils.api import parse_state
class LinearSensitivityAnalyzer:
"""
Class to perform sensitivity analysis
See: https://carloscinelli.com/files/Cinelli%20and%20Hazlett%20(2020)%20-%20Making%20Sense%20of%20Sensitivity.pdf
:param estimator: linear estimator of the causal model
:param data: Pandas dataframe
:param treatment_name: name of treatment
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0.
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = True)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param frac_strength_treatment: strength of association between unobserved confounder and treatment compared to benchmark covariate
:param frac_strength_outcome: strength of association between unobserved confounder and outcome compared to benchmark covariate
:param common_causes_order: The order of column names in OLS regression data
"""
def __init__(
self,
estimator=None,
data=None,
treatment_name=None,
percent_change_estimate=1.0,
significance_level=0.05,
confounder_increases_estimate=True,
benchmark_common_causes=None,
null_hypothesis_effect=0,
frac_strength_treatment=None,
frac_strength_outcome=None,
common_causes_order=None,
):
self.data = data
self.treatment_name = []
# original_treatment_name: : stores original variable names for labelling
self.original_treatment_name = treatment_name
for t in range(len(treatment_name)):
self.treatment_name.append("x" + str(t + 1))
self.percent_change_estimate = percent_change_estimate
self.significance_level = significance_level
self.confounder_increases_estimate = confounder_increases_estimate
self.estimator = estimator
self.estimator_model = estimator.model
self.null_hypothesis_effect = null_hypothesis_effect
# common_causes_map : maps the original variable names to variable names in OLS regression
self.common_causes_map = {}
for i in range(len(common_causes_order)):
self.common_causes_map[common_causes_order[i]] = "x" + str(len(self.treatment_name) + i + 1)
# benchmark_common_causes: stores variable names in terms of regression model variables
benchmark_common_causes = parse_state(benchmark_common_causes)
self.benchmark_common_causes = []
# original_benchmark_covariates: stores original variable names for labelling
self.original_benchmark_covariates = benchmark_common_causes
for i in range(len(benchmark_common_causes)):
self.benchmark_common_causes.append(self.common_causes_map[benchmark_common_causes[i]])
if type(frac_strength_treatment) in [int, list, float]:
self.frac_strength_treatment = np.array(frac_strength_treatment)
if type(frac_strength_outcome) in [int, list, float]:
self.frac_strength_outcome = np.array(frac_strength_outcome)
# estimate: estimate of regression
self.estimate = None
# degree_of_freedom: degree of freedom of error in regression
self.degree_of_freedom = None
# standard_error: standard error in regression
self.standard_error = None
# t_stats: Treatment coefficient t-value - measures how many standard errors the estimate is away from zero.
self.t_stats = None
# partial_f2: value to determine if a regression model and a nested version of it have a statistically significant difference between them
self.partial_f2 = None
# r2tu_w: partial R^2 of unobserved confounder "u" with treatment "t", after conditioning on observed covariates "w"
self.r2tu_w = None
# r2yu_tw: partial R^2 of unobserved confounder "u" with outcome "y", after conditioning on observed covariates "w" and treatment "t"
self.r2yu_tw = None
# r2twj_w: partial R^2 of observed covariate wj with treatment "t", after conditioning on observed covariates "w" excluding wj
self.r2twj_w = None
# r2ywj_tw: partial R^2 of observed covariate wj with outcome "y", after conditioning on observed covariates "w" (excluding wj) and treatment "t"
self.r2ywj_tw = None
# benchmarking_results: dataframe containing information about bounds and bias adjusted terms
self.benchmarking_results = None
# stats: dictionary containing information like robustness value, partial R^2, estimate, standard error , degree of freedom, partial f^2, t-statistic
self.stats = None
self.logger = logging.getLogger(__name__)
def treatment_regression(self):
"""
Function to perform regression with treatment as outcome
:returns: new OLS regression model
"""
features = self.estimator._observed_common_causes.copy()
treatment_df = self.estimator._treatment.copy()
features = sm.tools.add_constant(features)
features.rename(columns=self.common_causes_map, inplace=True)
model = sm.OLS(treatment_df, features)
estimator_model = model.fit()
return estimator_model
def partial_r2_func(self, estimator_model=None, treatment=None):
"""
Computes the partial R^2 of regression model
:param estimator_model: Linear regression model
:param treatment: treatment name
:returns: partial R^2 value
"""
estimate = estimator_model.params[treatment]
degree_of_freedom = int(estimator_model.df_resid)
if np.isscalar(estimate): # for single covariate
t_stats = estimator_model.tvalues[treatment]
return t_stats**2 / (t_stats**2 + degree_of_freedom)
else: # compute for a group of covariates
covariance_matrix = estimator_model.cov_params().loc[treatment, :][treatment]
n = len(estimate) # number of parameters in model
f_stat = (
np.matmul(np.matmul(estimate.values.T, np.linalg.inv(covariance_matrix.values)), estimate.values) / n
)
return f_stat * n / (f_stat * n + degree_of_freedom)
def robustness_value_func(self, alpha=1.0):
"""
Function to calculate the robustness value.
It is the minimum strength of association that confounders must have with treatment and outcome to change conclusions.
Robustness value describes how strong the association must be in order to reduce the estimated effect by (100 * percent_change_estimate)%.
Robustness value close to 1 means the treatment effect can handle strong confounders explaining almost all residual variation of the treatment and the outcome.
Robustness value close to 0 means that even very weak confounders can also change the results.
:param alpha: confidence interval (default = 1)
:returns: robustness value
"""
partial_cohen_f = abs(
self.t_stats / np.sqrt(self.degree_of_freedom)
) # partial f of treatment t with outcome y. f = t_val/sqrt(dof)
f_q = self.percent_change_estimate * partial_cohen_f
t_alpha_df_1 = t.ppf(
alpha / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
f_critical = abs(t_alpha_df_1) / np.sqrt(self.degree_of_freedom - 1)
f_adjusted = f_q - f_critical
if f_adjusted < 0:
r_value = 0
else:
r_value = 0.5 * (np.sqrt(f_adjusted**4 + (4 * f_adjusted**2)) - f_adjusted**2)
if f_adjusted > 0 and f_q > 1 / f_critical:
r_value = (f_q**2 - f_critical**2) / (1 + f_q**2)
return r_value
def compute_bias_adjusted(self, r2tu_w, r2yu_tw):
"""
Computes the bias adjusted estimate, standard error, t-value, partial R2, confidence intervals
:param r2tu_w: partial r^2 from regressing unobserved confounder u on treatment t after conditioning on observed covariates w
:param r2yu_tw: partial r^2 from regressing unobserved confounder u on outcome y after conditioning on observed covariates w and treatment t
:returns: Python dictionary with information about partial R^2 of confounders with treatment and outcome and bias adjusted variables
"""
bias_factor = np.sqrt((r2yu_tw * r2tu_w) / (1 - r2tu_w))
bias = bias_factor * (self.standard_error * np.sqrt(self.degree_of_freedom))
if self.confounder_increases_estimate:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) - bias)
else:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) + bias)
bias_adjusted_se = (
np.sqrt((1 - r2yu_tw) / (1 - r2tu_w))
* self.standard_error
* np.sqrt(self.degree_of_freedom / (self.degree_of_freedom - 1))
)
bias_adjusted_t = (bias_adjusted_estimate - self.null_hypothesis_effect) / bias_adjusted_se
bias_adjusted_partial_r2 = bias_adjusted_t**2 / (
bias_adjusted_t**2 + (self.degree_of_freedom - 1)
) # partial r2 formula used with new t value and dof - 1
num_se = t.ppf(
self.significance_level / 2, self.degree_of_freedom
) # Number of standard errors within Confidence Interval
bias_adjusted_upper_CI = bias_adjusted_estimate - num_se * bias_adjusted_se
bias_adjusted_lower_CI = bias_adjusted_estimate + num_se * bias_adjusted_se
benchmarking_results = {
"r2tu_w": r2tu_w,
"r2yu_tw": r2yu_tw,
"bias_adjusted_estimate": bias_adjusted_estimate,
"bias_adjusted_se": bias_adjusted_se,
"bias_adjusted_t": bias_adjusted_t,
"bias_adjusted_lower_CI": bias_adjusted_lower_CI,
"bias_adjusted_upper_CI": bias_adjusted_upper_CI,
}
return benchmarking_results
def check_sensitivity(self, plot=True):
"""
Function to perform sensitivity analysis.
:param plot: plot = True generates a plot of point estimate and the variations with respect to unobserved confounding.
plot = False overrides the setting
:returns: instance of LinearSensitivityAnalyzer class
"""
self.standard_error = np.array(self.estimator_model.bse[1 : (len(self.treatment_name) + 1)])[0]
self.degree_of_freedom = int(self.estimator_model.df_resid)
self.estimate = np.array(self.estimator_model.params[1 : (len(self.treatment_name) + 1)])[0]
self.t_stats = np.array(self.estimator_model.tvalues[self.treatment_name])[0]
# partial R^2 (r2yt_w) is the proportion of variation in outcome uniquely explained by treatment
partial_r2 = self.partial_r2_func(self.estimator_model, self.treatment_name)
RVq = self.robustness_value_func()
RV_qalpha = self.robustness_value_func(alpha=self.significance_level)
if self.confounder_increases_estimate:
self.null_hypothesis_effect = self.estimate * (1 - self.percent_change_estimate)
else:
self.null_hypothesis_effect = self.estimate * (1 + self.percent_change_estimate)
self.t_stats = (self.estimate - self.null_hypothesis_effect) / self.standard_error
self.partial_f2 = self.t_stats**2 / self.degree_of_freedom
# build a new regression model by considering treatment variables as outcome
treatment_linear_model = self.treatment_regression()
# r2twj_w is partial R^2 of covariate wj with treatment "t", after conditioning on covariates w(excluding wj)
# r2ywj_tw is partial R^2 of covariate wj with outcome "y", after conditioning on covariates w(excluding wj) and treatment "t"
self.r2twj_w = []
self.r2ywj_tw = []
for covariate in self.benchmark_common_causes:
self.r2ywj_tw.append(self.partial_r2_func(self.estimator_model, covariate))
self.r2twj_w.append(self.partial_r2_func(treatment_linear_model, covariate))
for i in range(len(self.benchmark_common_causes)):
r2twj_w = self.r2twj_w[i]
r2ywj_tw = self.r2ywj_tw[i]
# r2tu_w is the partial r^2 from regressing u on t after conditioning on w
self.r2tu_w = self.frac_strength_treatment * (r2twj_w / (1 - r2twj_w))
if any(val >= 1 for val in self.r2tu_w):
raise ValueError("r2tu_w can not be >= 1. Try a lower frac_strength_treatment value")
r2uwj_wt = (
self.frac_strength_treatment
* (r2twj_w**2)
/ ((1 - self.frac_strength_treatment * r2twj_w) * (1 - r2twj_w))
)
if any(val >= 1 for val in r2uwj_wt):
raise ValueError("r2uwj_wt can not be >= 1. Try a lower frac_strength_treatment value")
self.r2yu_tw = ((np.sqrt(self.frac_strength_outcome) + np.sqrt(r2uwj_wt)) / np.sqrt(1 - r2uwj_wt)) ** 2 * (
r2ywj_tw / (1 - r2ywj_tw)
)
if any(val > 1 for val in self.r2yu_tw):
for i in range(len(self.r2yu_tw)):
if self.r2yu_tw[i] > 1:
self.r2yu_tw[i] = 1
self.logger.warning(
"Warning: r2yu_tw can not be > 1. Try a lower frac_strength_treatment. Setting r2yu_tw to 1"
)
# Compute bias adjusted terms
self.benchmarking_results = self.compute_bias_adjusted(self.r2tu_w, self.r2yu_tw)
if plot == True:
self.plot()
self.stats = {
"estimate": self.estimate,
"standard_error": self.standard_error,
"degree of freedom": self.degree_of_freedom,
"t_statistic": self.t_stats,
"r2yt_w": partial_r2,
"partial_f2": self.partial_f2,
"robustness_value": RVq,
"robustness_value_alpha": RV_qalpha,
}
self.benchmarking_results = pd.DataFrame.from_dict(self.benchmarking_results)
return self
def plot_estimate(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting estimates.
Contour lines (z - axis) correspond to the adjusted estimate values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_estimate : threshold point
estimate_bounds : estimate values for unobserved confounders (bias adjusted estimates)
"""
critical_estimate = self.null_hypothesis_effect
contour_values = np.zeros((len(r2yu_tw), len(r2tu_w)))
for i in range(len(r2yu_tw)):
y = r2tu_w[i]
for j in range(len(r2tu_w)):
x = r2yu_tw[j]
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
estimate = benchmarking_results["bias_adjusted_estimate"]
contour_values[i][j] = estimate
estimate_bounds = self.benchmarking_results["bias_adjusted_estimate"]
return contour_values, critical_estimate, estimate_bounds
def plot_t(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting t.
Contour lines (z - axis) correspond to the adjusted t values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_t : threshold point
t_bounds : t-value for unobserved confounders (bias adjusted t values)
"""
t_alpha_df_1 = t.ppf(
self.significance_level / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
critical_t = abs(t_alpha_df_1) * np.sign(self.t_stats)
contour_values = []
for x in r2tu_w:
contour = []
for y in r2yu_tw:
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
t_value = benchmarking_results["bias_adjusted_t"]
contour.append(t_value)
contour_values.append(contour)
t_bounds = self.benchmarking_results["bias_adjusted_t"]
return contour_values, critical_t, t_bounds
def plot(
self,
plot_type="estimate",
critical_value=None,
x_limit=0.8,
y_limit=0.8,
num_points_per_contour=200,
plot_size=(7, 7),
contours_color="blue",
critical_contour_color="red",
label_fontsize=9,
contour_linewidths=0.75,
contour_linestyles="solid",
contours_label_color="black",
critical_label_color="red",
unadjusted_estimate_marker="D",
unadjusted_estimate_color="black",
adjusted_estimate_marker="^",
adjusted_estimate_color="red",
legend_position=(1.6, 0.6),
):
"""
Plots and summarizes the sensitivity bounds as a contour plot, as they vary with the partial R^2 of the unobserved confounder(s) with the treatment and the outcome
Two types of plots can be generated, based on adjusted estimates or adjusted t-values
X-axis: Partial R^2 of treatment and unobserved confounder(s)
Y-axis: Partial R^2 of outcome and unobserved confounder(s)
We also plot bounds on the partial R^2 of the unobserved confounders obtained from observed covariates
:param plot_type: "estimate" or "t-value"
:param critical_value: special reference value of the estimate or t-value that will be highlighted in the plot
:param x_limit: plot's maximum x_axis value (default = 0.8)
:param y_limit: plot's minimum y_axis value (default = 0.8)
:param num_points_per_contour: number of points to calculate and plot each contour line (default = 200)
:param plot_size: tuple denoting the size of the plot (default = (7,7))
:param contours_color: color of contour line (default = blue)
String or array. If array, lines will be plotted with the specific color in ascending order.
:param critical_contour_color: color of threshold line (default = red)
:param label_fontsize: fontsize for labelling contours (default = 9)
:param contour_linewidths: linewidths for contours (default = 0.75)
:param contour_linestyles: linestyles for contours (default = "solid")
See : https://matplotlib.org/3.5.0/gallery/lines_bars_and_markers/linestyles.html for more examples
:param contours_label_color: color of contour line label (default = black)
:param critical_label_color: color of threshold line label (default = red)
:param unadjusted_estimate_marker: marker type for unadjusted estimate in the plot (default = 'D')
See: https://matplotlib.org/stable/api/markers_api.html
:parm unadjusted_estimate_color: marker color for unadjusted estimate in the plot (default = "black")
:param adjusted_estimate_marker: marker type for bias adjusted estimates in the plot (default = '^')
:parm adjusted_estimate_color: marker color for bias adjusted estimates in the plot (default = "red")
:param legend_position:tuple denoting the position of the legend (default = (1.6, 0.6))
"""
# Plotting the contour plot
if plot_type == "estimate":
critical_value = 0 # default value of estimate
else:
critical_value = 2 # default t-value (usual approx for 95% CI)
fig, ax = plt.subplots(1, 1, figsize=plot_size)
ax.set_title("Sensitivity contour plot of %s" % plot_type)
ax.set_xlabel("Partial R^2 of confounder with treatment")
ax.set_ylabel("Partial R^2 of confounder with outcome")
for i in range(len(self.r2tu_w)):
x = self.r2tu_w[i]
y = self.r2yu_tw[i]
if x > 0.8 or y > 0.8:
x_limit = 0.99
y_limit = 0.99
break
r2tu_w = np.arange(0.0, x_limit, x_limit / num_points_per_contour)
r2yu_tw = np.arange(0.0, y_limit, y_limit / num_points_per_contour)
unadjusted_point_estimate = None
if plot_type == "estimate":
contour_values, critical_value, bound_values = self.plot_estimate(r2tu_w, r2yu_tw)
unadjusted_estimate = self.estimate
unadjusted_point_estimate = unadjusted_estimate
elif plot_type == "t-value":
contour_values, critical_value, bound_values = self.plot_t(r2tu_w, r2yu_tw)
unadjusted_t = self.t_stats
unadjusted_point_estimate = unadjusted_t
else:
raise ValueError("Current plotting method only supports 'estimate' and 't-value' ")
# Adding contours
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=contours_color,
linewidths=contour_linewidths,
linestyles=contour_linestyles,
)
ax.clabel(contour_plot, inline=1, fontsize=label_fontsize, colors=contours_label_color)
# Adding threshold contour line
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=critical_contour_color,
linewidths=contour_linewidths,
levels=[critical_value],
)
ax.clabel(contour_plot, [critical_value], inline=1, fontsize=label_fontsize, colors=critical_label_color)
# Adding unadjusted point estimate
ax.scatter(
[0],
[0],
marker=unadjusted_estimate_marker,
color=unadjusted_estimate_color,
label="Unadjusted({:1.2f})".format(unadjusted_point_estimate),
)
# Adding bounds to partial R^2 values for given strength of confounders
for i in range(len(self.frac_strength_treatment)):
frac_strength_treatment = self.frac_strength_treatment[i]
frac_strength_outcome = self.frac_strength_outcome[i]
if frac_strength_treatment == frac_strength_outcome:
signs = str(round(frac_strength_treatment, 2))
else:
signs = str(round(frac_strength_treatment, 2)) + "/" + str(round(frac_strength_outcome, 2))
label = (
str(i + 1)
+ " "
+ signs
+ " X "
+ str(self.original_benchmark_covariates)
+ " ({:1.2f}) ".format(bound_values[i])
)
ax.scatter(
self.r2tu_w[i],
self.r2yu_tw[i],
color=adjusted_estimate_color,
marker=adjusted_estimate_marker,
label=label,
)
ax.annotate(str(i + 1), (self.r2tu_w[i] + 0.005, self.r2yu_tw[i] + 0.005))
ax.legend(bbox_to_anchor=legend_position)
plt.show()
def __str__(self):
s = "Sensitivity Analysis to Unobserved Confounding using R^2 paramterization\n\n"
s += "Unadjusted Estimates of Treatment {0} :\n".format(self.original_treatment_name)
s += "Coefficient Estimate : {0}\n".format(self.estimate)
s += "Degree of Freedom : {0}\n".format(self.degree_of_freedom)
s += "Standard Error : {0}\n".format(self.standard_error)
s += "t-value : {0}\n".format(self.t_stats)
s += "F^2 value : {0}\n\n".format(self.partial_f2)
s += "Sensitivity Statistics : \n"
s += "Partial R2 of treatment with outcome : {0}\n".format(self.stats["r2yt_w"])
s += "Robustness Value : {0}\n\n".format(self.stats["robustness_value"])
s += "Interpretation of results :\n"
s += "Any confounder explaining less than {0}% percent of the residual variance of both the treatment and the outcome would not be strong enough to explain away the observed effect i.e bring down the estimate to 0 \n\n".format(
round(self.stats["robustness_value"] * 100, 2)
)
s += "For a significance level of {0}%, any confounder explaining more than {1}% percent of the residual variance of both the treatment and the outcome would be strong enough to make the estimated effect not 'statistically significant'\n\n".format(
self.significance_level * 100, round(self.stats["robustness_value_alpha"] * 100, 2)
)
s += "If confounders explained 100% of the residual variance of the outcome, they would need to explain at least {0}% of the residual variance of the treatment to bring down the estimated effect to 0\n".format(
round(self.stats["r2yt_w"] * 100, 2)
)
return s
| import logging
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import t
from dowhy.utils.api import parse_state
class LinearSensitivityAnalyzer:
"""
Class to perform sensitivity analysis
See: https://carloscinelli.com/files/Cinelli%20and%20Hazlett%20(2020)%20-%20Making%20Sense%20of%20Sensitivity.pdf
:param estimator: linear estimator of the causal model
:param data: Pandas dataframe
:param treatment_name: name of treatment
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1)
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0.
:param null_hypothesis_effect: assumed effect under the null hypothesis
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = True)
:param benchmark_common_causes: names of variables for bounding strength of confounders
:param significance_level: confidence interval for statistical inference(default = 0.05)
:param frac_strength_treatment: strength of association between unobserved confounder and treatment compared to benchmark covariate
:param frac_strength_outcome: strength of association between unobserved confounder and outcome compared to benchmark covariate
:param common_causes_order: The order of column names in OLS regression data
"""
def __init__(
self,
estimator=None,
data=None,
treatment_name=None,
percent_change_estimate=1.0,
significance_level=0.05,
confounder_increases_estimate=True,
benchmark_common_causes=None,
null_hypothesis_effect=0,
frac_strength_treatment=None,
frac_strength_outcome=None,
common_causes_order=None,
):
self.data = data
self.treatment_name = []
# original_treatment_name: : stores original variable names for labelling
self.original_treatment_name = treatment_name
for t in range(len(treatment_name)):
self.treatment_name.append("x" + str(t + 1))
self.percent_change_estimate = percent_change_estimate
self.significance_level = significance_level
self.confounder_increases_estimate = confounder_increases_estimate
self.estimator = estimator
self.estimator_model = estimator.model
self.null_hypothesis_effect = null_hypothesis_effect
# common_causes_map : maps the original variable names to variable names in OLS regression
self.common_causes_map = {}
for i in range(len(common_causes_order)):
self.common_causes_map[common_causes_order[i]] = "x" + str(len(self.treatment_name) + i + 1)
# benchmark_common_causes: stores variable names in terms of regression model variables
benchmark_common_causes = parse_state(benchmark_common_causes)
self.benchmark_common_causes = []
# original_benchmark_covariates: stores original variable names for labelling
self.original_benchmark_covariates = benchmark_common_causes
for i in range(len(benchmark_common_causes)):
self.benchmark_common_causes.append(self.common_causes_map[benchmark_common_causes[i]])
if type(frac_strength_treatment) in [int, list, float]:
self.frac_strength_treatment = np.array(frac_strength_treatment)
if type(frac_strength_outcome) in [int, list, float]:
self.frac_strength_outcome = np.array(frac_strength_outcome)
# estimate: estimate of regression
self.estimate = None
# degree_of_freedom: degree of freedom of error in regression
self.degree_of_freedom = None
# standard_error: standard error in regression
self.standard_error = None
# t_stats: Treatment coefficient t-value - measures how many standard errors the estimate is away from zero.
self.t_stats = None
# partial_f2: value to determine if a regression model and a nested version of it have a statistically significant difference between them
self.partial_f2 = None
# r2tu_w: partial R^2 of unobserved confounder "u" with treatment "t", after conditioning on observed covariates "w"
self.r2tu_w = None
# r2yu_tw: partial R^2 of unobserved confounder "u" with outcome "y", after conditioning on observed covariates "w" and treatment "t"
self.r2yu_tw = None
# r2twj_w: partial R^2 of observed covariate wj with treatment "t", after conditioning on observed covariates "w" excluding wj
self.r2twj_w = None
# r2ywj_tw: partial R^2 of observed covariate wj with outcome "y", after conditioning on observed covariates "w" (excluding wj) and treatment "t"
self.r2ywj_tw = None
# benchmarking_results: dataframe containing information about bounds and bias adjusted terms
self.benchmarking_results = None
# stats: dictionary containing information like robustness value, partial R^2, estimate, standard error , degree of freedom, partial f^2, t-statistic
self.stats = None
self.logger = logging.getLogger(__name__)
def treatment_regression(self):
"""
Function to perform regression with treatment as outcome
:returns: new OLS regression model
"""
features = self.estimator._observed_common_causes.copy()
treatment_df = self.estimator._treatment.copy()
features = sm.tools.add_constant(features)
features.rename(columns=self.common_causes_map, inplace=True)
model = sm.OLS(treatment_df, features)
estimator_model = model.fit()
return estimator_model
def partial_r2_func(self, estimator_model=None, treatment=None):
"""
Computes the partial R^2 of regression model
:param estimator_model: Linear regression model
:param treatment: treatment name
:returns: partial R^2 value
"""
estimate = estimator_model.params[treatment]
degree_of_freedom = int(estimator_model.df_resid)
if np.isscalar(estimate): # for single covariate
t_stats = estimator_model.tvalues[treatment]
return t_stats**2 / (t_stats**2 + degree_of_freedom)
else: # compute for a group of covariates
covariance_matrix = estimator_model.cov_params().loc[treatment, :][treatment]
n = len(estimate) # number of parameters in model
f_stat = (
np.matmul(np.matmul(estimate.values.T, np.linalg.inv(covariance_matrix.values)), estimate.values) / n
)
return f_stat * n / (f_stat * n + degree_of_freedom)
def robustness_value_func(self, alpha=1.0):
"""
Function to calculate the robustness value.
It is the minimum strength of association that confounders must have with treatment and outcome to change conclusions.
Robustness value describes how strong the association must be in order to reduce the estimated effect by (100 * percent_change_estimate)%.
Robustness value close to 1 means the treatment effect can handle strong confounders explaining almost all residual variation of the treatment and the outcome.
Robustness value close to 0 means that even very weak confounders can also change the results.
:param alpha: confidence interval (default = 1)
:returns: robustness value
"""
partial_cohen_f = abs(
self.t_stats / np.sqrt(self.degree_of_freedom)
) # partial f of treatment t with outcome y. f = t_val/sqrt(dof)
f_q = self.percent_change_estimate * partial_cohen_f
t_alpha_df_1 = t.ppf(
alpha / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
f_critical = abs(t_alpha_df_1) / np.sqrt(self.degree_of_freedom - 1)
f_adjusted = f_q - f_critical
if f_adjusted < 0:
r_value = 0
else:
r_value = 0.5 * (np.sqrt(f_adjusted**4 + (4 * f_adjusted**2)) - f_adjusted**2)
if f_adjusted > 0 and f_q > 1 / f_critical:
r_value = (f_q**2 - f_critical**2) / (1 + f_q**2)
return r_value
def compute_bias_adjusted(self, r2tu_w, r2yu_tw):
"""
Computes the bias adjusted estimate, standard error, t-value, partial R2, confidence intervals
:param r2tu_w: partial r^2 from regressing unobserved confounder u on treatment t after conditioning on observed covariates w
:param r2yu_tw: partial r^2 from regressing unobserved confounder u on outcome y after conditioning on observed covariates w and treatment t
:returns: Python dictionary with information about partial R^2 of confounders with treatment and outcome and bias adjusted variables
"""
bias_factor = np.sqrt((r2yu_tw * r2tu_w) / (1 - r2tu_w))
bias = bias_factor * (self.standard_error * np.sqrt(self.degree_of_freedom))
if self.confounder_increases_estimate:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) - bias)
else:
bias_adjusted_estimate = np.sign(self.estimate) * (abs(self.estimate) + bias)
bias_adjusted_se = (
np.sqrt((1 - r2yu_tw) / (1 - r2tu_w))
* self.standard_error
* np.sqrt(self.degree_of_freedom / (self.degree_of_freedom - 1))
)
bias_adjusted_t = (bias_adjusted_estimate - self.null_hypothesis_effect) / bias_adjusted_se
bias_adjusted_partial_r2 = bias_adjusted_t**2 / (
bias_adjusted_t**2 + (self.degree_of_freedom - 1)
) # partial r2 formula used with new t value and dof - 1
num_se = t.ppf(
self.significance_level / 2, self.degree_of_freedom
) # Number of standard errors within Confidence Interval
bias_adjusted_upper_CI = bias_adjusted_estimate - num_se * bias_adjusted_se
bias_adjusted_lower_CI = bias_adjusted_estimate + num_se * bias_adjusted_se
benchmarking_results = {
"r2tu_w": r2tu_w,
"r2yu_tw": r2yu_tw,
"bias_adjusted_estimate": bias_adjusted_estimate,
"bias_adjusted_se": bias_adjusted_se,
"bias_adjusted_t": bias_adjusted_t,
"bias_adjusted_lower_CI": bias_adjusted_lower_CI,
"bias_adjusted_upper_CI": bias_adjusted_upper_CI,
}
return benchmarking_results
def check_sensitivity(self, plot=True):
"""
Function to perform sensitivity analysis.
:param plot: plot = True generates a plot of point estimate and the variations with respect to unobserved confounding.
plot = False overrides the setting
:returns: instance of LinearSensitivityAnalyzer class
"""
self.standard_error = np.array(self.estimator_model.bse[1 : (len(self.treatment_name) + 1)])[0]
self.degree_of_freedom = int(self.estimator_model.df_resid)
self.estimate = np.array(self.estimator_model.params[1 : (len(self.treatment_name) + 1)])[0]
self.t_stats = np.array(self.estimator_model.tvalues[self.treatment_name])[0]
# partial R^2 (r2yt_w) is the proportion of variation in outcome uniquely explained by treatment
partial_r2 = self.partial_r2_func(self.estimator_model, self.treatment_name)
RVq = self.robustness_value_func()
RV_qalpha = self.robustness_value_func(alpha=self.significance_level)
if self.confounder_increases_estimate:
self.null_hypothesis_effect = self.estimate * (1 - self.percent_change_estimate)
else:
self.null_hypothesis_effect = self.estimate * (1 + self.percent_change_estimate)
self.t_stats = (self.estimate - self.null_hypothesis_effect) / self.standard_error
self.partial_f2 = self.t_stats**2 / self.degree_of_freedom
# build a new regression model by considering treatment variables as outcome
treatment_linear_model = self.treatment_regression()
# r2twj_w is partial R^2 of covariate wj with treatment "t", after conditioning on covariates w(excluding wj)
# r2ywj_tw is partial R^2 of covariate wj with outcome "y", after conditioning on covariates w(excluding wj) and treatment "t"
self.r2twj_w = []
self.r2ywj_tw = []
for covariate in self.benchmark_common_causes:
self.r2ywj_tw.append(self.partial_r2_func(self.estimator_model, covariate))
self.r2twj_w.append(self.partial_r2_func(treatment_linear_model, covariate))
for i in range(len(self.benchmark_common_causes)):
r2twj_w = self.r2twj_w[i]
r2ywj_tw = self.r2ywj_tw[i]
# r2tu_w is the partial r^2 from regressing u on t after conditioning on w
self.r2tu_w = self.frac_strength_treatment * (r2twj_w / (1 - r2twj_w))
if any(val >= 1 for val in self.r2tu_w):
raise ValueError("r2tu_w can not be >= 1. Try a lower frac_strength_treatment value")
r2uwj_wt = (
self.frac_strength_treatment
* (r2twj_w**2)
/ ((1 - self.frac_strength_treatment * r2twj_w) * (1 - r2twj_w))
)
if any(val >= 1 for val in r2uwj_wt):
raise ValueError("r2uwj_wt can not be >= 1. Try a lower frac_strength_treatment value")
self.r2yu_tw = ((np.sqrt(self.frac_strength_outcome) + np.sqrt(r2uwj_wt)) / np.sqrt(1 - r2uwj_wt)) ** 2 * (
r2ywj_tw / (1 - r2ywj_tw)
)
if any(val > 1 for val in self.r2yu_tw):
for i in range(len(self.r2yu_tw)):
if self.r2yu_tw[i] > 1:
self.r2yu_tw[i] = 1
self.logger.warning(
"Warning: r2yu_tw can not be > 1. Try a lower frac_strength_treatment. Setting r2yu_tw to 1"
)
# Compute bias adjusted terms
self.benchmarking_results = self.compute_bias_adjusted(self.r2tu_w, self.r2yu_tw)
if plot == True:
self.plot()
self.stats = {
"estimate": self.estimate,
"standard_error": self.standard_error,
"degree of freedom": self.degree_of_freedom,
"t_statistic": self.t_stats,
"r2yt_w": partial_r2,
"partial_f2": self.partial_f2,
"robustness_value": RVq,
"robustness_value_alpha": RV_qalpha,
}
self.benchmarking_results = pd.DataFrame.from_dict(self.benchmarking_results)
return self
def plot_estimate(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting estimates.
Contour lines (z - axis) correspond to the adjusted estimate values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_estimate : threshold point
estimate_bounds : estimate values for unobserved confounders (bias adjusted estimates)
"""
critical_estimate = self.null_hypothesis_effect
contour_values = np.zeros((len(r2yu_tw), len(r2tu_w)))
for i in range(len(r2yu_tw)):
y = r2yu_tw[i]
for j in range(len(r2tu_w)):
x = r2tu_w[j]
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
estimate = benchmarking_results["bias_adjusted_estimate"]
contour_values[i][j] = estimate
estimate_bounds = self.benchmarking_results["bias_adjusted_estimate"]
return contour_values, critical_estimate, estimate_bounds
def plot_t(self, r2tu_w, r2yu_tw):
"""
Computes the contours, threshold line and bounds for plotting t.
Contour lines (z - axis) correspond to the adjusted t values for different values of r2tu_w (x) and r2yu_tw (y).
:param r2tu_w: hypothetical partial R^2 of confounder with treatment(x - axis)
:param r2yu_tw: hypothetical partial R^2 of confounder with outcome(y - axis)
:returns:
contour_values : values of contour lines for the plot
critical_t : threshold point
t_bounds : t-value for unobserved confounders (bias adjusted t values)
"""
t_alpha_df_1 = t.ppf(
self.significance_level / 2, self.degree_of_freedom - 1
) # t-value threshold with alpha significance level and dof-1 degrees of freedom
critical_t = abs(t_alpha_df_1) * np.sign(self.t_stats)
contour_values = []
for x in r2tu_w:
contour = []
for y in r2yu_tw:
benchmarking_results = self.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
t_value = benchmarking_results["bias_adjusted_t"]
contour.append(t_value)
contour_values.append(contour)
t_bounds = self.benchmarking_results["bias_adjusted_t"]
return contour_values, critical_t, t_bounds
def plot(
self,
plot_type="estimate",
critical_value=None,
x_limit=0.8,
y_limit=0.8,
num_points_per_contour=200,
plot_size=(7, 7),
contours_color="blue",
critical_contour_color="red",
label_fontsize=9,
contour_linewidths=0.75,
contour_linestyles="solid",
contours_label_color="black",
critical_label_color="red",
unadjusted_estimate_marker="D",
unadjusted_estimate_color="black",
adjusted_estimate_marker="^",
adjusted_estimate_color="red",
legend_position=(1.6, 0.6),
):
"""
Plots and summarizes the sensitivity bounds as a contour plot, as they vary with the partial R^2 of the unobserved confounder(s) with the treatment and the outcome
Two types of plots can be generated, based on adjusted estimates or adjusted t-values
X-axis: Partial R^2 of treatment and unobserved confounder(s)
Y-axis: Partial R^2 of outcome and unobserved confounder(s)
We also plot bounds on the partial R^2 of the unobserved confounders obtained from observed covariates
:param plot_type: "estimate" or "t-value"
:param critical_value: special reference value of the estimate or t-value that will be highlighted in the plot
:param x_limit: plot's maximum x_axis value (default = 0.8)
:param y_limit: plot's minimum y_axis value (default = 0.8)
:param num_points_per_contour: number of points to calculate and plot each contour line (default = 200)
:param plot_size: tuple denoting the size of the plot (default = (7,7))
:param contours_color: color of contour line (default = blue)
String or array. If array, lines will be plotted with the specific color in ascending order.
:param critical_contour_color: color of threshold line (default = red)
:param label_fontsize: fontsize for labelling contours (default = 9)
:param contour_linewidths: linewidths for contours (default = 0.75)
:param contour_linestyles: linestyles for contours (default = "solid")
See : https://matplotlib.org/3.5.0/gallery/lines_bars_and_markers/linestyles.html for more examples
:param contours_label_color: color of contour line label (default = black)
:param critical_label_color: color of threshold line label (default = red)
:param unadjusted_estimate_marker: marker type for unadjusted estimate in the plot (default = 'D')
See: https://matplotlib.org/stable/api/markers_api.html
:parm unadjusted_estimate_color: marker color for unadjusted estimate in the plot (default = "black")
:param adjusted_estimate_marker: marker type for bias adjusted estimates in the plot (default = '^')
:parm adjusted_estimate_color: marker color for bias adjusted estimates in the plot (default = "red")
:param legend_position:tuple denoting the position of the legend (default = (1.6, 0.6))
"""
# Plotting the contour plot
if plot_type == "estimate":
critical_value = 0 # default value of estimate
else:
critical_value = 2 # default t-value (usual approx for 95% CI)
fig, ax = plt.subplots(1, 1, figsize=plot_size)
ax.set_title("Sensitivity contour plot of %s" % plot_type)
ax.set_xlabel("Partial R^2 of confounder with treatment")
ax.set_ylabel("Partial R^2 of confounder with outcome")
for i in range(len(self.r2tu_w)):
x = self.r2tu_w[i]
y = self.r2yu_tw[i]
if x > 0.8 or y > 0.8:
x_limit = 0.99
y_limit = 0.99
break
r2tu_w = np.arange(0.0, x_limit, x_limit / num_points_per_contour)
r2yu_tw = np.arange(0.0, y_limit, y_limit / num_points_per_contour)
unadjusted_point_estimate = None
if plot_type == "estimate":
contour_values, critical_value, bound_values = self.plot_estimate(r2tu_w, r2yu_tw)
unadjusted_estimate = self.estimate
unadjusted_point_estimate = unadjusted_estimate
elif plot_type == "t-value":
contour_values, critical_value, bound_values = self.plot_t(r2tu_w, r2yu_tw)
unadjusted_t = self.t_stats
unadjusted_point_estimate = unadjusted_t
else:
raise ValueError("Current plotting method only supports 'estimate' and 't-value' ")
# Adding contours
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=contours_color,
linewidths=contour_linewidths,
linestyles=contour_linestyles,
)
ax.clabel(contour_plot, inline=1, fontsize=label_fontsize, colors=contours_label_color)
# Adding threshold contour line
contour_plot = ax.contour(
r2tu_w,
r2yu_tw,
contour_values,
colors=critical_contour_color,
linewidths=contour_linewidths,
levels=[critical_value],
)
ax.clabel(contour_plot, [critical_value], inline=1, fontsize=label_fontsize, colors=critical_label_color)
# Adding unadjusted point estimate
ax.scatter(
[0],
[0],
marker=unadjusted_estimate_marker,
color=unadjusted_estimate_color,
label="Unadjusted({:1.2f})".format(unadjusted_point_estimate),
)
# Adding bounds to partial R^2 values for given strength of confounders
for i in range(len(self.frac_strength_treatment)):
frac_strength_treatment = self.frac_strength_treatment[i]
frac_strength_outcome = self.frac_strength_outcome[i]
if frac_strength_treatment == frac_strength_outcome:
signs = str(round(frac_strength_treatment, 2))
else:
signs = str(round(frac_strength_treatment, 2)) + "/" + str(round(frac_strength_outcome, 2))
label = (
str(i + 1)
+ " "
+ signs
+ " X "
+ str(self.original_benchmark_covariates)
+ " ({:1.2f}) ".format(bound_values[i])
)
ax.scatter(
self.r2tu_w[i],
self.r2yu_tw[i],
color=adjusted_estimate_color,
marker=adjusted_estimate_marker,
label=label,
)
ax.annotate(str(i + 1), (self.r2tu_w[i] + 0.005, self.r2yu_tw[i] + 0.005))
ax.legend(bbox_to_anchor=legend_position)
plt.show()
def __str__(self):
s = "Sensitivity Analysis to Unobserved Confounding using R^2 paramterization\n\n"
s += "Unadjusted Estimates of Treatment {0} :\n".format(self.original_treatment_name)
s += "Coefficient Estimate : {0}\n".format(self.estimate)
s += "Degree of Freedom : {0}\n".format(self.degree_of_freedom)
s += "Standard Error : {0}\n".format(self.standard_error)
s += "t-value : {0}\n".format(self.t_stats)
s += "F^2 value : {0}\n\n".format(self.partial_f2)
s += "Sensitivity Statistics : \n"
s += "Partial R2 of treatment with outcome : {0}\n".format(self.stats["r2yt_w"])
s += "Robustness Value : {0}\n\n".format(self.stats["robustness_value"])
s += "Interpretation of results :\n"
s += "Any confounder explaining less than {0}% percent of the residual variance of both the treatment and the outcome would not be strong enough to explain away the observed effect i.e bring down the estimate to 0 \n\n".format(
round(self.stats["robustness_value"] * 100, 2)
)
s += "For a significance level of {0}%, any confounder explaining more than {1}% percent of the residual variance of both the treatment and the outcome would be strong enough to make the estimated effect not 'statistically significant'\n\n".format(
self.significance_level * 100, round(self.stats["robustness_value_alpha"] * 100, 2)
)
s += "If confounders explained 100% of the residual variance of the outcome, they would need to explain at least {0}% of the residual variance of the treatment to bring down the estimated effect to 0\n".format(
round(self.stats["r2yt_w"] * 100, 2)
)
return s
| anusha0409 | 81841c697bd5e80ecf9e731432305f6186666f1f | bb446c333f2256074304b0dec9cb5628d284b542 | yes, this is to fix a bug in the prior code. | amit-sharma | 384 |
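For context on the comment above: comparing the before_content and after_content in this row, the only functional change is in plot_estimate, where the contour-grid loop had its r2tu_w/r2yu_tw indexing swapped. The block below is a standalone sketch of the corrected orientation (it assumes `analyzer` is a LinearSensitivityAnalyzer on which check_sensitivity has already run), not the literal patch:

```python
import numpy as np

def contour_grid(analyzer, r2tu_w, r2yu_tw):
    # Rows follow r2yu_tw (confounder-outcome partial R^2, the y-axis) and columns
    # follow r2tu_w (confounder-treatment partial R^2, the x-axis), matching the
    # fixed version of plot_estimate above.
    values = np.zeros((len(r2yu_tw), len(r2tu_w)))
    for i, y in enumerate(r2yu_tw):
        for j, x in enumerate(r2tu_w):
            adjusted = analyzer.compute_bias_adjusted(r2tu_w=x, r2yu_tw=y)
            values[i, j] = adjusted["bias_adjusted_estimate"]
    return values
```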
mdn/kuma | 8,071 | user with is_staff=True should be subscribers | null | null | 2022-04-06 11:04:23+00:00 | 2022-04-07 13:25:10+00:00 | kuma/users/tasks.py |
import json
from celery import task
from django.contrib.auth import get_user_model
from kuma.users.auth import KumaOIDCAuthenticationBackend
from kuma.users.models import AccountEvent, UserProfile
from kuma.users.utils import get_valid_subscription_type_or_none
@task
def process_event_delete_user(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
except get_user_model().DoesNotExist:
return
user.delete()
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_subscription_state_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
payload = json.loads(event.payload)
last_event = AccountEvent.objects.filter(
fxa_uid=event.fxa_uid,
status=AccountEvent.EventStatus.PROCESSED,
event_type=AccountEvent.EventType.SUBSCRIPTION_CHANGED,
).first()
if last_event:
last_event_payload = json.loads(last_event.payload)
if last_event_payload["changeTime"] >= payload["changeTime"]:
event.status = AccountEvent.EventStatus.IGNORED
event.save()
return
if "mdn_plus" in payload["capabilities"]:
if payload["isActive"]:
profile.is_subscriber = True
profile.subscription_type = get_valid_subscription_type_or_none(
payload["capabilities"]
)
else:
profile.is_subscriber = False
profile.save()
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
@task
def process_event_password_change(event_id):
event = AccountEvent.objects.get(id=event_id)
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_profile_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
refresh_token = profile.fxa_refresh_token
if not refresh_token:
event.status = AccountEvent.IGNORED
event.save()
return
fxa = KumaOIDCAuthenticationBackend()
token_info = fxa.get_token(
{
"client_id": fxa.OIDC_RP_CLIENT_ID,
"client_secret": fxa.OIDC_RP_CLIENT_SECRET,
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"ttl": 60 * 5,
}
)
access_token = token_info.get("access_token")
user_info = fxa.get_userinfo(access_token, None, None)
fxa.update_user(user, user_info)
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
| import json
from celery import task
from django.contrib.auth import get_user_model
from kuma.users.auth import KumaOIDCAuthenticationBackend
from kuma.users.models import AccountEvent, UserProfile
from kuma.users.utils import get_valid_subscription_type_or_none
@task
def process_event_delete_user(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
except get_user_model().DoesNotExist:
return
user.delete()
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_subscription_state_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
payload = json.loads(event.payload)
last_event = AccountEvent.objects.filter(
fxa_uid=event.fxa_uid,
status=AccountEvent.EventStatus.PROCESSED,
event_type=AccountEvent.EventType.SUBSCRIPTION_CHANGED,
).first()
if last_event:
last_event_payload = json.loads(last_event.payload)
if last_event_payload["changeTime"] >= payload["changeTime"]:
event.status = AccountEvent.EventStatus.IGNORED
event.save()
return
if "mdn_plus" in payload["capabilities"] and not user.is_staff:
if payload["isActive"]:
profile.is_subscriber = True
profile.subscription_type = get_valid_subscription_type_or_none(
payload["capabilities"]
)
else:
profile.is_subscriber = False
profile.save()
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
@task
def process_event_password_change(event_id):
event = AccountEvent.objects.get(id=event_id)
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_profile_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
refresh_token = profile.fxa_refresh_token
if not refresh_token:
event.status = AccountEvent.IGNORED
event.save()
return
fxa = KumaOIDCAuthenticationBackend()
token_info = fxa.get_token(
{
"client_id": fxa.OIDC_RP_CLIENT_ID,
"client_secret": fxa.OIDC_RP_CLIENT_SECRET,
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"ttl": 60 * 5,
}
)
access_token = token_info.get("access_token")
user_info = fxa.get_userinfo(access_token, None, None)
fxa.update_user(user, user_info)
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
| fiji-flo | 57285dcf43694852d10e68159136c39a3e63cb29 | e1cef9d866060531069a0a73405f42b012c61627 | Don't update capabilities for `is_staff`. (We need to clean this manually). | fiji-flo | 0 |
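The sketch below is a hypothetical test of the new is_staff guard, not part of the PR: it assumes pytest-django is available, that UserProfile and AccountEvent rows can be created with only the fields referenced in tasks.py, and that the celery task can be called synchronously as a plain function.

```python
import json

import pytest
from django.contrib.auth import get_user_model

from kuma.users.models import AccountEvent, UserProfile
from kuma.users.tasks import process_event_subscription_state_change


@pytest.mark.django_db
def test_staff_subscription_not_downgraded():
    user = get_user_model().objects.create(username="fxa-uid-123", is_staff=True)
    profile = UserProfile.objects.create(user=user, is_subscriber=True)
    event = AccountEvent.objects.create(
        fxa_uid=user.username,
        event_type=AccountEvent.EventType.SUBSCRIPTION_CHANGED,
        payload=json.dumps(
            {"capabilities": ["mdn_plus"], "isActive": False, "changeTime": 1}
        ),
    )

    process_event_subscription_state_change(event.id)

    profile.refresh_from_db()
    # With the new "and not user.is_staff" guard, the staff profile is left untouched.
    assert profile.is_subscriber
```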
mdn/kuma | 8,071 | user with is_staff=True should be subscribers | null | null | 2022-04-06 11:04:23+00:00 | 2022-04-07 13:25:10+00:00 | kuma/users/tasks.py |
import json
from celery import task
from django.contrib.auth import get_user_model
from kuma.users.auth import KumaOIDCAuthenticationBackend
from kuma.users.models import AccountEvent, UserProfile
from kuma.users.utils import get_valid_subscription_type_or_none
@task
def process_event_delete_user(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
except get_user_model().DoesNotExist:
return
user.delete()
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_subscription_state_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
payload = json.loads(event.payload)
last_event = AccountEvent.objects.filter(
fxa_uid=event.fxa_uid,
status=AccountEvent.EventStatus.PROCESSED,
event_type=AccountEvent.EventType.SUBSCRIPTION_CHANGED,
).first()
if last_event:
last_event_payload = json.loads(last_event.payload)
if last_event_payload["changeTime"] >= payload["changeTime"]:
event.status = AccountEvent.EventStatus.IGNORED
event.save()
return
if "mdn_plus" in payload["capabilities"]:
if payload["isActive"]:
profile.is_subscriber = True
profile.subscription_type = get_valid_subscription_type_or_none(
payload["capabilities"]
)
else:
profile.is_subscriber = False
profile.save()
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
@task
def process_event_password_change(event_id):
event = AccountEvent.objects.get(id=event_id)
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_profile_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
refresh_token = profile.fxa_refresh_token
if not refresh_token:
event.status = AccountEvent.IGNORED
event.save()
return
fxa = KumaOIDCAuthenticationBackend()
token_info = fxa.get_token(
{
"client_id": fxa.OIDC_RP_CLIENT_ID,
"client_secret": fxa.OIDC_RP_CLIENT_SECRET,
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"ttl": 60 * 5,
}
)
access_token = token_info.get("access_token")
user_info = fxa.get_userinfo(access_token, None, None)
fxa.update_user(user, user_info)
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
| import json
from celery import task
from django.contrib.auth import get_user_model
from kuma.users.auth import KumaOIDCAuthenticationBackend
from kuma.users.models import AccountEvent, UserProfile
from kuma.users.utils import get_valid_subscription_type_or_none
@task
def process_event_delete_user(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
except get_user_model().DoesNotExist:
return
user.delete()
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_subscription_state_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
payload = json.loads(event.payload)
last_event = AccountEvent.objects.filter(
fxa_uid=event.fxa_uid,
status=AccountEvent.EventStatus.PROCESSED,
event_type=AccountEvent.EventType.SUBSCRIPTION_CHANGED,
).first()
if last_event:
last_event_payload = json.loads(last_event.payload)
if last_event_payload["changeTime"] >= payload["changeTime"]:
event.status = AccountEvent.EventStatus.IGNORED
event.save()
return
if "mdn_plus" in payload["capabilities"] and not user.is_staff:
if payload["isActive"]:
profile.is_subscriber = True
profile.subscription_type = get_valid_subscription_type_or_none(
payload["capabilities"]
)
else:
profile.is_subscriber = False
profile.save()
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
@task
def process_event_password_change(event_id):
event = AccountEvent.objects.get(id=event_id)
event.status = AccountEvent.PROCESSED
event.save()
@task
def process_event_profile_change(event_id):
event = AccountEvent.objects.get(id=event_id)
try:
user = get_user_model().objects.get(username=event.fxa_uid)
profile = UserProfile.objects.get(user=user)
except get_user_model().DoesNotExist:
return
refresh_token = profile.fxa_refresh_token
if not refresh_token:
event.status = AccountEvent.IGNORED
event.save()
return
fxa = KumaOIDCAuthenticationBackend()
token_info = fxa.get_token(
{
"client_id": fxa.OIDC_RP_CLIENT_ID,
"client_secret": fxa.OIDC_RP_CLIENT_SECRET,
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"ttl": 60 * 5,
}
)
access_token = token_info.get("access_token")
user_info = fxa.get_userinfo(access_token, None, None)
fxa.update_user(user, user_info)
event.status = AccountEvent.EventStatus.PROCESSED
event.save()
| fiji-flo | 57285dcf43694852d10e68159136c39a3e63cb29 | e1cef9d866060531069a0a73405f42b012c61627 | Don't update capabilities for `is_staff`. (We need to clean this manually). | fiji-flo | 1 |
mdn/kuma | 8,070 | feat(notifications): process Content Updates via /update/ route | Part of https://github.com/mdn/yari-private/issues/981.
## Before
1. `/update/` created only BCD Update Notifications.
2. `/create/pr/` created a single Content Update Notification independently.
## After
1. `/update/` creates both BCD **and Content** Update Notifications.
2. `/create/pr/` creates a single Content Update Notification **by reusing the same logic**. | null | 2022-04-05 17:20:14+00:00 | 2022-04-07 13:35:40+00:00 | kuma/api/v1/plus/notifications.py |
from __future__ import annotations
import datetime
import json
from typing import Optional
import requests
from django.conf import settings
from django.db.models import Q
from django.middleware.csrf import get_token
from ninja import Field, Router
from ninja.pagination import paginate
from kuma.documenturls.models import DocumentURL
from kuma.notifications.models import (
DefaultWatch,
Notification,
NotificationData,
UserWatch,
Watch,
)
from kuma.notifications.utils import process_changes
from kuma.settings.common import MAX_NON_SUBSCRIBED
from kuma.users.models import UserProfile
from ..pagination import LimitOffsetPaginatedResponse, LimitOffsetPaginationWithMeta
from ..smarter_schema import Schema
admin_router = Router(tags=["admin"])
notifications_router = Router(tags=["notifications"])
watch_router = Router(tags=["watch"])
limit_offset_paginate_with_meta = paginate(LimitOffsetPaginationWithMeta)
class Ok(Schema):
ok: bool = True
class WatchUpdateResponse(Schema):
ok: bool = True
subscription_limit_reached: bool = False
class NotOk(Schema):
ok: bool = False
error: str
info: dict = None
class NotificationSchema(Schema):
id: int
title: str = Field(..., alias="notification.title")
text: str = Field(..., alias="notification.text")
url: str = Field(..., alias="notification.page_url")
created: datetime.datetime = Field(..., alias="notification.created")
deleted: bool
read: bool
starred: bool
@notifications_router.get(
"/",
response=LimitOffsetPaginatedResponse[NotificationSchema],
url_name="plus.notifications",
)
@limit_offset_paginate_with_meta
def notifications(
request,
starred: bool = None,
unread: bool = None,
filterType: str = None,
q: str = None,
sort: str = None,
**kwargs,
):
qs = request.user.notification_set.select_related("notification")
if starred is not None:
qs = qs.filter(starred=starred)
if unread is not None:
qs = qs.filter(read=not unread)
if filterType:
qs = qs.filter(notification__type=filterType)
if q:
qs = qs.filter(
Q(notification__title__icontains=q) | Q(notification__text__icontains=q)
)
if sort == "title":
order_by = "notification__title"
else:
order_by = "-notification__created"
qs = qs.order_by(order_by, "id")
qs = qs.filter(deleted=False)
return qs
@notifications_router.post("/all/mark-as-read/", response=Ok)
def mark_all_as_read(request):
request.user.notification_set.filter(read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/mark-as-read/", response=Ok)
def mark_as_read(request, pk: int):
request.user.notification_set.filter(pk=pk, read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/toggle-starred/", response={200: Ok, 400: str})
def toggle_starred(request, pk: int):
try:
notification = Notification.objects.get(user=request.user, pk=pk)
except Notification.DoesNotExist:
return 400, "no matching notification"
notification.starred = not notification.starred
notification.save()
return 200, True
class StarMany(Schema):
ids: list[int]
@notifications_router.post(
"/star-ids/", response={200: Ok, 400: str}, url_name="notifications_star_ids"
)
def star_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=True
)
return 200, True
@notifications_router.post(
"/unstar-ids/", response={200: Ok, 400: str}, url_name="notifications_unstar_ids"
)
def unstar_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=False
)
return 200, True
@notifications_router.post(
"/{int:pk}/delete/", response=Ok, url_name="notifications_delete_id"
)
def delete_notification(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=True)
return True
@notifications_router.post("/{int:pk}/undo-deletion/", response=Ok)
def undo_deletion(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=False)
return True
class DeleteMany(Schema):
ids: list[int]
@notifications_router.post(
"/delete-ids/", response={200: Ok, 400: NotOk}, url_name="notifications_delete_many"
)
def delete_notifications(request, data: DeleteMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
deleted=True
)
return 200, True
class WatchSchema(Schema):
title: str
url: str
path: str
@watch_router.get("/watching/", url_name="watching")
def watched(request, q: str = "", url: str = "", limit: int = 20, offset: int = 0):
qs = request.user.userwatch_set.select_related("watch", "user__defaultwatch")
profile: UserProfile = request.auth
hasDefault = None
try:
hasDefault = request.user.defaultwatch.custom_serialize()
except DefaultWatch.DoesNotExist:
pass
if url:
url = DocumentURL.normalize_uri(url)
qs = qs.filter(watch__url=url)
if q:
qs = qs.filter(watch__title__icontains=q)
qs = qs[offset : offset + limit]
response = {}
results = []
# Default settings at top level if exist
if hasDefault:
response["default"] = hasDefault
response["csrfmiddlewaretoken"] = get_token(request)
for item in qs:
res = {}
res["title"] = item.watch.title
res["url"] = item.watch.url
res["path"] = item.watch.path
# No custom notifications just major updates.
if not item.custom:
res["status"] = "major"
else:
res["status"] = "custom"
# Subscribed to custom
if item.custom_default and hasDefault:
# Subscribed to the defaults
res["custom"] = "default"
else:
# Subscribed to fine-grained options
res["custom"] = item.custom_serialize()
results.append(res)
if url != "" and len(results) == 0:
response["status"] = "unwatched"
elif len(results) == 1 and url != "":
response = response | results[0]
else:
response["items"] = results
if not profile.is_subscriber:
response["subscription_limit_reached"] = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return response
class UpdateWatchCustom(Schema):
compatibility: list[str]
content: bool
class UpdateWatch(Schema):
unwatch: bool = None
title: str = None
path: str = None
custom: UpdateWatchCustom = None
custom_default: bool = None
update_custom_default: bool = False
@watch_router.post(
"/watching/", response={200: WatchUpdateResponse, 400: NotOk, 400: NotOk}
)
def update_watch(request, url: str, data: UpdateWatch):
url = DocumentURL.normalize_uri(url)
profile: UserProfile = request.auth
watched: Optional[UserWatch] = (
request.user.userwatch_set.select_related("watch", "user__defaultwatch")
.filter(watch__url=url)
.first()
)
user = watched.user if watched else request.user
watched_count = request.user.userwatch_set.count()
subscription_limit_reached = watched_count >= MAX_NON_SUBSCRIBED["notification"]
if data.unwatch:
if watched:
watched.delete()
subscription_limit_reached = (watched_count - 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
return 200, {
"subscription_limit_reached": subscription_limit_reached,
"ok": True,
}
title = data.title
if not title:
return 400, {"error": "missing title"}
path = data.path or ""
watched_data = {"custom": data.custom is not None}
if data.custom:
custom_default = bool(data.custom_default)
watched_data["custom_default"] = custom_default
custom_data = {
"content_updates": data.custom.content,
}
custom_data["browser_compatibility"] = sorted(data.custom.compatibility)
if custom_default:
try:
default_watch = user.defaultwatch
if data.update_custom_default:
for key, value in custom_data.items():
setattr(default_watch, key, value)
default_watch.save()
except DefaultWatch.DoesNotExist:
# Always create custom defaults if they are missing.
DefaultWatch.objects.update_or_create(user=user, defaults=custom_data)
watched_data.update(custom_data)
if watched:
watch: Watch = watched.watch
# Update the title / path if they changed.
if title != watch.title or path != watch.path:
watch.title = title
watch.path = path
watch.save()
else:
# Check on creation if allowed.
if (
watched_count >= MAX_NON_SUBSCRIBED["notification"]
and not profile.is_subscriber
):
return 400, {
"error": "max_subscriptions",
"info": {"max_allowed": MAX_NON_SUBSCRIBED["notification"]},
}
watch = Watch.objects.get_or_create(url=url, title=title, path=path)[0]
subscription_limit_reached = (watched_count + 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
user.userwatch_set.update_or_create(watch=watch, defaults=watched_data)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class UnwatchMany(Schema):
unwatch: list[str]
@watch_router.post(
"/unwatch-many/",
response={200: WatchUpdateResponse, 400: NotOk},
url_name="unwatch_many",
)
def unwatch(request, data: UnwatchMany):
request.user.userwatch_set.select_related("watch", "user__watch").filter(
watch__url__in=data.unwatch
).delete()
profile: UserProfile = request.auth
if profile.is_subscriber:
subscription_limit_reached = False
else:
subscription_limit_reached = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class CreateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
title: str
text: str
@admin_router.post("/create/", response={200: Ok, 400: NotOk})
def create(request, body: CreateNotificationSchema):
url = DocumentURL.normalize_uri(body.raw_url)
watchers = Watch.objects.filter(url=url)
if not watchers:
return 400, {"error": "No watchers found"}
notification_data, _ = NotificationData.objects.get_or_create(
text=body.text, title=body.title, type="content"
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
return True
class UpdateNotificationSchema(Schema):
filename: str
@admin_router.post("/update/", response={200: Ok, 400: NotOk, 401: NotOk})
def update(request, body: UpdateNotificationSchema):
try:
changes = json.loads(
requests.get(settings.NOTIFICATIONS_CHANGES_URL + body.filename).content
)
except Exception:
return 400, {"error": "Error while processing file"}
try:
process_changes(changes)
except Exception:
return 400, {"ok": False, "error": "Error while processing file"}
return 200, True
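# The /create/pr/ endpoint below builds the "Page updated" Content Update
# Notification directly for every watcher of the page; the updated revision
# of this file replaces it with an /update/content/ route that reuses
# process_changes() instead.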
class CreatePRNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
repo: str = Field(..., alias="repo")
pr: int
@admin_router.post("/create/pr/", response={200: Ok, 400: NotOk, 401: NotOk})
def create_pr(request, body: CreatePRNotificationSchema):
url = DocumentURL.normalize_uri(body.raw_url)
watchers = Watch.objects.filter(url=url)
if not watchers:
return 400, {"error": "No watchers found"}
content = f"Page updated (see PR!{body.repo.strip('/')}!{body.pr}!!)"
notification_data, _ = NotificationData.objects.get_or_create(
text=content, title=watchers[0].title, type="content", page_url=body.raw_url
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
return 200, True
| from __future__ import annotations
import datetime
import json
from typing import Optional
import requests
from django.conf import settings
from django.db.models import Q
from django.middleware.csrf import get_token
from ninja import Field, Router
from ninja.pagination import paginate
from kuma.documenturls.models import DocumentURL
from kuma.notifications.models import (
DefaultWatch,
Notification,
NotificationData,
UserWatch,
Watch,
)
from kuma.notifications.utils import process_changes
from kuma.settings.common import MAX_NON_SUBSCRIBED
from kuma.users.models import UserProfile
from ..pagination import LimitOffsetPaginatedResponse, LimitOffsetPaginationWithMeta
from ..smarter_schema import Schema
admin_router = Router(tags=["admin"])
notifications_router = Router(tags=["notifications"])
watch_router = Router(tags=["watch"])
limit_offset_paginate_with_meta = paginate(LimitOffsetPaginationWithMeta)
class Ok(Schema):
ok: bool = True
class WatchUpdateResponse(Schema):
ok: bool = True
subscription_limit_reached: bool = False
class NotOk(Schema):
ok: bool = False
error: str
info: dict = None
class NotificationSchema(Schema):
id: int
title: str = Field(..., alias="notification.title")
text: str = Field(..., alias="notification.text")
url: str = Field(..., alias="notification.page_url")
created: datetime.datetime = Field(..., alias="notification.created")
deleted: bool
read: bool
starred: bool
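# Paginated list of the signed-in user's notifications with filtering by
# starred/unread state and type, free-text search over title and text, and
# sorting by title or creation date (newest first by default); soft-deleted
# rows are always excluded.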
@notifications_router.get(
"/",
response=LimitOffsetPaginatedResponse[NotificationSchema],
url_name="plus.notifications",
)
@limit_offset_paginate_with_meta
def notifications(
request,
starred: bool = None,
unread: bool = None,
filterType: str = None,
q: str = None,
sort: str = None,
**kwargs,
):
qs = request.user.notification_set.select_related("notification")
if starred is not None:
qs = qs.filter(starred=starred)
if unread is not None:
qs = qs.filter(read=not unread)
if filterType:
qs = qs.filter(notification__type=filterType)
if q:
qs = qs.filter(
Q(notification__title__icontains=q) | Q(notification__text__icontains=q)
)
if sort == "title":
order_by = "notification__title"
else:
order_by = "-notification__created"
qs = qs.order_by(order_by, "id")
qs = qs.filter(deleted=False)
return qs
@notifications_router.post("/all/mark-as-read/", response=Ok)
def mark_all_as_read(request):
request.user.notification_set.filter(read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/mark-as-read/", response=Ok)
def mark_as_read(request, pk: int):
request.user.notification_set.filter(pk=pk, read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/toggle-starred/", response={200: Ok, 400: str})
def toggle_starred(request, pk: int):
try:
notification = Notification.objects.get(user=request.user, pk=pk)
except Notification.DoesNotExist:
return 400, "no matching notification"
notification.starred = not notification.starred
notification.save()
return 200, True
class StarMany(Schema):
ids: list[int]
@notifications_router.post(
"/star-ids/", response={200: Ok, 400: str}, url_name="notifications_star_ids"
)
def star_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=True
)
return 200, True
@notifications_router.post(
"/unstar-ids/", response={200: Ok, 400: str}, url_name="notifications_unstar_ids"
)
def unstar_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=False
)
return 200, True
@notifications_router.post(
"/{int:pk}/delete/", response=Ok, url_name="notifications_delete_id"
)
def delete_notification(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=True)
return True
@notifications_router.post("/{int:pk}/undo-deletion/", response=Ok)
def undo_deletion(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=False)
return True
class DeleteMany(Schema):
ids: list[int]
@notifications_router.post(
"/delete-ids/", response={200: Ok, 400: NotOk}, url_name="notifications_delete_many"
)
def delete_notifications(request, data: DeleteMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
deleted=True
)
return 200, True
class WatchSchema(Schema):
title: str
url: str
path: str
@watch_router.get("/watching/", url_name="watching")
def watched(request, q: str = "", url: str = "", limit: int = 20, offset: int = 0):
qs = request.user.userwatch_set.select_related("watch", "user__defaultwatch")
profile: UserProfile = request.auth
hasDefault = None
try:
hasDefault = request.user.defaultwatch.custom_serialize()
except DefaultWatch.DoesNotExist:
pass
if url:
url = DocumentURL.normalize_uri(url)
qs = qs.filter(watch__url=url)
if q:
qs = qs.filter(watch__title__icontains=q)
qs = qs[offset : offset + limit]
response = {}
results = []
# Default settings at top level if exist
if hasDefault:
response["default"] = hasDefault
response["csrfmiddlewaretoken"] = get_token(request)
for item in qs:
res = {}
res["title"] = item.watch.title
res["url"] = item.watch.url
res["path"] = item.watch.path
# No custom notifications just major updates.
if not item.custom:
res["status"] = "major"
else:
res["status"] = "custom"
# Subscribed to custom
if item.custom_default and hasDefault:
# Subscribed to the defaults
res["custom"] = "default"
else:
# Subscribed to fine-grained options
res["custom"] = item.custom_serialize()
results.append(res)
if url != "" and len(results) == 0:
response["status"] = "unwatched"
elif len(results) == 1 and url != "":
response = response | results[0]
else:
response["items"] = results
if not profile.is_subscriber:
response["subscription_limit_reached"] = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return response
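# Schemas and endpoint for creating or updating a page watch: "custom" holds
# fine-grained settings (compatibility categories plus content updates), and
# "custom_default" means the user's DefaultWatch settings apply instead of
# per-watch ones.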
class UpdateWatchCustom(Schema):
compatibility: list[str]
content: bool
class UpdateWatch(Schema):
unwatch: bool = None
title: str = None
path: str = None
custom: UpdateWatchCustom = None
custom_default: bool = None
update_custom_default: bool = False
@watch_router.post(
"/watching/", response={200: WatchUpdateResponse, 400: NotOk, 400: NotOk}
)
def update_watch(request, url: str, data: UpdateWatch):
url = DocumentURL.normalize_uri(url)
profile: UserProfile = request.auth
watched: Optional[UserWatch] = (
request.user.userwatch_set.select_related("watch", "user__defaultwatch")
.filter(watch__url=url)
.first()
)
user = watched.user if watched else request.user
watched_count = request.user.userwatch_set.count()
subscription_limit_reached = watched_count >= MAX_NON_SUBSCRIBED["notification"]
if data.unwatch:
if watched:
watched.delete()
subscription_limit_reached = (watched_count - 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
return 200, {
"subscription_limit_reached": subscription_limit_reached,
"ok": True,
}
title = data.title
if not title:
return 400, {"error": "missing title"}
path = data.path or ""
watched_data = {"custom": data.custom is not None}
if data.custom:
custom_default = bool(data.custom_default)
watched_data["custom_default"] = custom_default
custom_data = {
"content_updates": data.custom.content,
}
custom_data["browser_compatibility"] = sorted(data.custom.compatibility)
if custom_default:
try:
default_watch = user.defaultwatch
if data.update_custom_default:
for key, value in custom_data.items():
setattr(default_watch, key, value)
default_watch.save()
except DefaultWatch.DoesNotExist:
# Always create custom defaults if they are missing.
DefaultWatch.objects.update_or_create(user=user, defaults=custom_data)
watched_data.update(custom_data)
if watched:
watch: Watch = watched.watch
# Update the title / path if they changed.
if title != watch.title or path != watch.path:
watch.title = title
watch.path = path
watch.save()
else:
# Check on creation if allowed.
if (
watched_count >= MAX_NON_SUBSCRIBED["notification"]
and not profile.is_subscriber
):
return 400, {
"error": "max_subscriptions",
"info": {"max_allowed": MAX_NON_SUBSCRIBED["notification"]},
}
watch = Watch.objects.get_or_create(url=url, title=title, path=path)[0]
subscription_limit_reached = (watched_count + 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
user.userwatch_set.update_or_create(watch=watch, defaults=watched_data)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
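# Bulk unsubscribe: deletes the user's watches for every URL in the request
# and reports whether the free-tier watch limit is still reached.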
class UnwatchMany(Schema):
unwatch: list[str]
@watch_router.post(
"/unwatch-many/",
response={200: WatchUpdateResponse, 400: NotOk},
url_name="unwatch_many",
)
def unwatch(request, data: UnwatchMany):
request.user.userwatch_set.select_related("watch", "user__watch").filter(
watch__url__in=data.unwatch
).delete()
profile: UserProfile = request.auth
if profile.is_subscriber:
subscription_limit_reached = False
else:
subscription_limit_reached = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class CreateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
title: str
text: str
@admin_router.post("/create/", response={200: Ok, 400: NotOk})
def create(request, body: CreateNotificationSchema):
url = DocumentURL.normalize_uri(body.raw_url)
watchers = Watch.objects.filter(url=url)
if not watchers:
return 400, {"error": "No watchers found"}
notification_data, _ = NotificationData.objects.get_or_create(
text=body.text, title=body.title, type="content"
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
return True
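# Admin hook: /update/ downloads the named changes file from
# settings.NOTIFICATIONS_CHANGES_URL and hands the parsed list to
# process_changes(), which turns the entries into user notifications.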
class UpdateNotificationSchema(Schema):
filename: str
@admin_router.post("/update/", response={200: Ok, 400: NotOk, 401: NotOk})
def update(request, body: UpdateNotificationSchema):
try:
changes = json.loads(
requests.get(settings.NOTIFICATIONS_CHANGES_URL + body.filename).content
)
except Exception:
return 400, {"error": "Error while processing file"}
try:
process_changes(changes)
except Exception:
return 400, {"ok": False, "error": "Error while processing file"}
return 200, True
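# New in this revision: /update/content/ wraps a single page edit as a
# "content_updated" change entry and pushes it through the same
# process_changes() pipeline used by the bulk /update/ route above.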
class ContentUpdateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
pr_url: str = Field(..., alias="pr")
@admin_router.post(
"/update/content/",
response={200: Ok, 400: NotOk, 401: NotOk},
url_name="admin.update_content",
)
def update_content(request, body: ContentUpdateNotificationSchema):
try:
url = DocumentURL.normalize_uri(body.raw_url)
changes = [
{
"event": "content_updated",
"page_url": url,
"pr_url": body.pr_url,
}
]
process_changes(changes)
except Exception as e:
return 400, {"error": f"Error while processing PR: {repr(e)}"}
return 200, True
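# Illustrative request for the endpoint above (URLs invented for
# illustration); "page" and "pr" map to raw_url and pr_url via the schema
# aliases:
#   POST .../update/content/
#   {"page": "https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API",
#    "pr": "https://github.com/mdn/content/pull/1234"}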
| caugner | e1cef9d866060531069a0a73405f42b012c61627 | a5247595160b84987621c7ef899b454e370db188 | Nice change. | Guyzeroth | 2 |
| caugner | e1cef9d866060531069a0a73405f42b012c61627 | a5247595160b84987621c7ef899b454e370db188 | Could this get a test? | Guyzeroth | 3 |
mdn/kuma | 8,070 | feat(notifications): process Content Updates via /update/ route | Part of https://github.com/mdn/yari-private/issues/981.
| from __future__ import annotations
import datetime
import json
from typing import Optional
# import requests
import requests
from django.conf import settings
from django.db.models import Q
from django.middleware.csrf import get_token
from ninja import Field, Router
from ninja.pagination import paginate
from kuma.documenturls.models import DocumentURL
from kuma.notifications.models import (
DefaultWatch,
Notification,
NotificationData,
UserWatch,
Watch,
)
from kuma.notifications.utils import process_changes
from kuma.settings.common import MAX_NON_SUBSCRIBED
from kuma.users.models import UserProfile
from ..pagination import LimitOffsetPaginatedResponse, LimitOffsetPaginationWithMeta
from ..smarter_schema import Schema
admin_router = Router(tags=["admin"])
notifications_router = Router(tags=["notifications"])
watch_router = Router(tags=["watch"])
limit_offset_paginate_with_meta = paginate(LimitOffsetPaginationWithMeta)
class Ok(Schema):
ok: bool = True
class WatchUpdateResponse(Schema):
ok: bool = True
subscription_limit_reached: bool = False
class NotOk(Schema):
ok: bool = False
error: str
info: dict = None
class NotificationSchema(Schema):
id: int
title: str = Field(..., alias="notification.title")
text: str = Field(..., alias="notification.text")
url: str = Field(..., alias="notification.page_url")
created: datetime.datetime = Field(..., alias="notification.created")
deleted: bool
read: bool
starred: bool
@notifications_router.get(
"/",
response=LimitOffsetPaginatedResponse[NotificationSchema],
url_name="plus.notifications",
)
@limit_offset_paginate_with_meta
def notifications(
request,
starred: bool = None,
unread: bool = None,
filterType: str = None,
q: str = None,
sort: str = None,
**kwargs,
):
qs = request.user.notification_set.select_related("notification")
if starred is not None:
qs = qs.filter(starred=starred)
if unread is not None:
qs = qs.filter(read=not unread)
if filterType:
qs = qs.filter(notification__type=filterType)
if q:
qs = qs.filter(
Q(notification__title__icontains=q) | Q(notification__text__icontains=q)
)
if sort == "title":
order_by = "notification__title"
else:
order_by = "-notification__created"
qs = qs.order_by(order_by, "id")
qs = qs.filter(deleted=False)
return qs
@notifications_router.post("/all/mark-as-read/", response=Ok)
def mark_all_as_read(request):
request.user.notification_set.filter(read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/mark-as-read/", response=Ok)
def mark_as_read(request, pk: int):
request.user.notification_set.filter(pk=pk, read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/toggle-starred/", response={200: Ok, 400: str})
def toggle_starred(request, pk: int):
try:
notification = Notification.objects.get(user=request.user, pk=pk)
except Notification.DoesNotExist:
return 400, "no matching notification"
notification.starred = not notification.starred
notification.save()
return 200, True
class StarMany(Schema):
ids: list[int]
@notifications_router.post(
"/star-ids/", response={200: Ok, 400: str}, url_name="notifications_star_ids"
)
def star_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=True
)
return 200, True
@notifications_router.post(
"/unstar-ids/", response={200: Ok, 400: str}, url_name="notifications_unstar_ids"
)
def unstar_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=False
)
return 200, True
@notifications_router.post(
"/{int:pk}/delete/", response=Ok, url_name="notifications_delete_id"
)
def delete_notification(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=True)
return True
@notifications_router.post("/{int:pk}/undo-deletion/", response=Ok)
def undo_deletion(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=False)
return True
class DeleteMany(Schema):
ids: list[int]
@notifications_router.post(
"/delete-ids/", response={200: Ok, 400: NotOk}, url_name="notifications_delete_many"
)
def delete_notifications(request, data: DeleteMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
deleted=True
)
return 200, True
class WatchSchema(Schema):
title: str
url: str
path: str
@watch_router.get("/watching/", url_name="watching")
def watched(request, q: str = "", url: str = "", limit: int = 20, offset: int = 0):
qs = request.user.userwatch_set.select_related("watch", "user__defaultwatch")
profile: UserProfile = request.auth
hasDefault = None
try:
hasDefault = request.user.defaultwatch.custom_serialize()
except DefaultWatch.DoesNotExist:
pass
if url:
url = DocumentURL.normalize_uri(url)
qs = qs.filter(watch__url=url)
if q:
qs = qs.filter(watch__title__icontains=q)
qs = qs[offset : offset + limit]
response = {}
results = []
# Default settings at top level if they exist
if hasDefault:
response["default"] = hasDefault
response["csrfmiddlewaretoken"] = get_token(request)
for item in qs:
res = {}
res["title"] = item.watch.title
res["url"] = item.watch.url
res["path"] = item.watch.path
# No custom notifications, just major updates.
if not item.custom:
res["status"] = "major"
else:
res["status"] = "custom"
# Subscribed to custom
if item.custom_default and hasDefault:
# Subscribed to the defaults
res["custom"] = "default"
else:
# Subscribed to fine-grained options
res["custom"] = item.custom_serialize()
results.append(res)
if url != "" and len(results) == 0:
response["status"] = "unwatched"
elif len(results) == 1 and url != "":
response = response | results[0]
else:
response["items"] = results
if not profile.is_subscriber:
response["subscription_limit_reached"] = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return response
class UpdateWatchCustom(Schema):
compatibility: list[str]
content: bool
class UpdateWatch(Schema):
unwatch: bool = None
title: str = None
path: str = None
custom: UpdateWatchCustom = None
custom_default: bool = None
update_custom_default: bool = False
@watch_router.post(
"/watching/", response={200: WatchUpdateResponse, 400: NotOk, 400: NotOk}
)
def update_watch(request, url: str, data: UpdateWatch):
url = DocumentURL.normalize_uri(url)
profile: UserProfile = request.auth
watched: Optional[UserWatch] = (
request.user.userwatch_set.select_related("watch", "user__defaultwatch")
.filter(watch__url=url)
.first()
)
user = watched.user if watched else request.user
watched_count = request.user.userwatch_set.count()
subscription_limit_reached = watched_count >= MAX_NON_SUBSCRIBED["notification"]
if data.unwatch:
if watched:
watched.delete()
subscription_limit_reached = (watched_count - 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
return 200, {
"subscription_limit_reached": subscription_limit_reached,
"ok": True,
}
title = data.title
if not title:
return 400, {"error": "missing title"}
path = data.path or ""
watched_data = {"custom": data.custom is not None}
if data.custom:
custom_default = bool(data.custom_default)
watched_data["custom_default"] = custom_default
custom_data = {
"content_updates": data.custom.content,
}
custom_data["browser_compatibility"] = sorted(data.custom.compatibility)
if custom_default:
try:
default_watch = user.defaultwatch
if data.update_custom_default:
for key, value in custom_data.items():
setattr(default_watch, key, value)
default_watch.save()
except DefaultWatch.DoesNotExist:
# Always create custom defaults if they are missing.
DefaultWatch.objects.update_or_create(user=user, defaults=custom_data)
watched_data.update(custom_data)
if watched:
watch: Watch = watched.watch
# Update the title / path if they changed.
if title != watch.title or path != watch.path:
watch.title = title
watch.path = path
watch.save()
else:
# Check on creation if allowed.
if (
watched_count >= MAX_NON_SUBSCRIBED["notification"]
and not profile.is_subscriber
):
return 400, {
"error": "max_subscriptions",
"info": {"max_allowed": MAX_NON_SUBSCRIBED["notification"]},
}
watch = Watch.objects.get_or_create(url=url, title=title, path=path)[0]
subscription_limit_reached = (watched_count + 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
user.userwatch_set.update_or_create(watch=watch, defaults=watched_data)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class UnwatchMany(Schema):
unwatch: list[str]
@watch_router.post(
"/unwatch-many/",
response={200: WatchUpdateResponse, 400: NotOk},
url_name="unwatch_many",
)
def unwatch(request, data: UnwatchMany):
request.user.userwatch_set.select_related("watch", "user__watch").filter(
watch__url__in=data.unwatch
).delete()
profile: UserProfile = request.auth
if profile.is_subscriber:
subscription_limit_reached = False
else:
subscription_limit_reached = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class CreateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
title: str
text: str
@admin_router.post("/create/", response={200: Ok, 400: NotOk})
def create(request, body: CreateNotificationSchema):
url = DocumentURL.normalize_uri(body.raw_url)
watchers = Watch.objects.filter(url=url)
if not watchers:
return 400, {"error": "No watchers found"}
notification_data, _ = NotificationData.objects.get_or_create(
text=body.text, title=body.title, type="content"
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
return True
class UpdateNotificationSchema(Schema):
filename: str
@admin_router.post("/update/", response={200: Ok, 400: NotOk, 401: NotOk})
def update(request, body: UpdateNotificationSchema):
try:
changes = json.loads(
requests.get(settings.NOTIFICATIONS_CHANGES_URL + body.filename).content
)
except Exception:
return 400, {"error": "Error while processing file"}
try:
process_changes(changes)
except Exception:
return 400, {"ok": False, "error": "Error while processing file"}
return 200, True
class ContentUpdateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
pr_url: str = Field(..., alias="pr")
@admin_router.post(
"/update/content/",
response={200: Ok, 400: NotOk, 401: NotOk},
url_name="admin.update_content",
)
def update_content(request, body: ContentUpdateNotificationSchema):
try:
url = DocumentURL.normalize_uri(body.raw_url)
changes = [
{
"event": "content_updated",
"page_url": url,
"pr_url": body.pr_url,
}
]
process_changes(changes)
except Exception as e:
return 400, {"error": f"Error while processing PR: {repr(e)}"}
return 200, True
| caugner | e1cef9d866060531069a0a73405f42b012c61627 | a5247595160b84987621c7ef899b454e370db188 | Added in 8f35a2f40f3942051bccdc6180eac06b6bed6dcf, thanks for helping me out with this. | caugner | 4 |
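The diff above adds the `/update/content/` admin route backed by `ContentUpdateNotificationSchema`. As a rough illustration only, here is a minimal sketch of how a client might call it; the base URL, the mount point of `admin_router`, and the authorization header are assumptions and not part of the change itself.

```python
# Hypothetical call to the new /update/content/ admin route (sketch only).
# The base URL and credentials below are placeholders, not real values.
import requests

ADMIN_BASE = "https://example.org/api/v1/plus/notifications/admin"  # assumed mount point

payload = {
    "page": "/en-US/docs/Web/API/Fetch_API",            # alias for raw_url
    "pr": "https://github.com/mdn/content/pull/12345",  # full PR URL, parsed later by process_changes
}

resp = requests.post(
    f"{ADMIN_BASE}/update/content/",
    json=payload,
    headers={"Authorization": "Bearer <admin token>"},   # placeholder auth
)
print(resp.status_code, resp.json())  # expect 200 and {"ok": true} on success
```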
mdn/kuma | 8,070 | feat(notifications): process Content Updates via /update/ route | Part of https://github.com/mdn/yari-private/issues/981.
## Before
1. `/update/` created only BCD Update Notifications.
2. `/create/pr/` created a single Content Update Notification independently.
## After
1. `/update/` creates both BCD **and Content** Update Notifications.
2. `/create/pr/` creates a single Content Update Notification **by reusing the same logic**. | null | 2022-04-05 17:20:14+00:00 | 2022-04-07 13:35:40+00:00 | kuma/api/v1/plus/notifications.py | from __future__ import annotations
import datetime
import json
from typing import Optional
# import requests
import requests
from django.conf import settings
from django.db.models import Q
from django.middleware.csrf import get_token
from ninja import Field, Router
from ninja.pagination import paginate
from kuma.documenturls.models import DocumentURL
from kuma.notifications.models import (
DefaultWatch,
Notification,
NotificationData,
UserWatch,
Watch,
)
from kuma.notifications.utils import process_changes
from kuma.settings.common import MAX_NON_SUBSCRIBED
from kuma.users.models import UserProfile
from ..pagination import LimitOffsetPaginatedResponse, LimitOffsetPaginationWithMeta
from ..smarter_schema import Schema
admin_router = Router(tags=["admin"])
notifications_router = Router(tags=["notifications"])
watch_router = Router(tags=["watch"])
limit_offset_paginate_with_meta = paginate(LimitOffsetPaginationWithMeta)
class Ok(Schema):
ok: bool = True
class WatchUpdateResponse(Schema):
ok: bool = True
subscription_limit_reached: bool = False
class NotOk(Schema):
ok: bool = False
error: str
info: dict = None
class NotificationSchema(Schema):
id: int
title: str = Field(..., alias="notification.title")
text: str = Field(..., alias="notification.text")
url: str = Field(..., alias="notification.page_url")
created: datetime.datetime = Field(..., alias="notification.created")
deleted: bool
read: bool
starred: bool
@notifications_router.get(
"/",
response=LimitOffsetPaginatedResponse[NotificationSchema],
url_name="plus.notifications",
)
@limit_offset_paginate_with_meta
def notifications(
request,
starred: bool = None,
unread: bool = None,
filterType: str = None,
q: str = None,
sort: str = None,
**kwargs,
):
qs = request.user.notification_set.select_related("notification")
if starred is not None:
qs = qs.filter(starred=starred)
if unread is not None:
qs = qs.filter(read=not unread)
if filterType:
qs = qs.filter(notification__type=filterType)
if q:
qs = qs.filter(
Q(notification__title__icontains=q) | Q(notification__text__icontains=q)
)
if sort == "title":
order_by = "notification__title"
else:
order_by = "-notification__created"
qs = qs.order_by(order_by, "id")
qs = qs.filter(deleted=False)
return qs
@notifications_router.post("/all/mark-as-read/", response=Ok)
def mark_all_as_read(request):
request.user.notification_set.filter(read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/mark-as-read/", response=Ok)
def mark_as_read(request, pk: int):
request.user.notification_set.filter(pk=pk, read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/toggle-starred/", response={200: Ok, 400: str})
def toggle_starred(request, pk: int):
try:
notification = Notification.objects.get(user=request.user, pk=pk)
except Notification.DoesNotExist:
return 400, "no matching notification"
notification.starred = not notification.starred
notification.save()
return 200, True
class StarMany(Schema):
ids: list[int]
@notifications_router.post(
"/star-ids/", response={200: Ok, 400: str}, url_name="notifications_star_ids"
)
def star_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=True
)
return 200, True
@notifications_router.post(
"/unstar-ids/", response={200: Ok, 400: str}, url_name="notifications_unstar_ids"
)
def unstar_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=False
)
return 200, True
@notifications_router.post(
"/{int:pk}/delete/", response=Ok, url_name="notifications_delete_id"
)
def delete_notification(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=True)
return True
@notifications_router.post("/{int:pk}/undo-deletion/", response=Ok)
def undo_deletion(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=False)
return True
class DeleteMany(Schema):
ids: list[int]
@notifications_router.post(
"/delete-ids/", response={200: Ok, 400: NotOk}, url_name="notifications_delete_many"
)
def delete_notifications(request, data: DeleteMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
deleted=True
)
return 200, True
class WatchSchema(Schema):
title: str
url: str
path: str
@watch_router.get("/watching/", url_name="watching")
def watched(request, q: str = "", url: str = "", limit: int = 20, offset: int = 0):
qs = request.user.userwatch_set.select_related("watch", "user__defaultwatch")
profile: UserProfile = request.auth
hasDefault = None
try:
hasDefault = request.user.defaultwatch.custom_serialize()
except DefaultWatch.DoesNotExist:
pass
if url:
url = DocumentURL.normalize_uri(url)
qs = qs.filter(watch__url=url)
if q:
qs = qs.filter(watch__title__icontains=q)
qs = qs[offset : offset + limit]
response = {}
results = []
# Default settings at top level if they exist
if hasDefault:
response["default"] = hasDefault
response["csrfmiddlewaretoken"] = get_token(request)
for item in qs:
res = {}
res["title"] = item.watch.title
res["url"] = item.watch.url
res["path"] = item.watch.path
# No custom notifications, just major updates.
if not item.custom:
res["status"] = "major"
else:
res["status"] = "custom"
# Subscribed to custom
if item.custom_default and hasDefault:
# Subscribed to the defaults
res["custom"] = "default"
else:
# Subscribed to fine-grained options
res["custom"] = item.custom_serialize()
results.append(res)
if url != "" and len(results) == 0:
response["status"] = "unwatched"
elif len(results) == 1 and url != "":
response = response | results[0]
else:
response["items"] = results
if not profile.is_subscriber:
response["subscription_limit_reached"] = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return response
class UpdateWatchCustom(Schema):
compatibility: list[str]
content: bool
class UpdateWatch(Schema):
unwatch: bool = None
title: str = None
path: str = None
custom: UpdateWatchCustom = None
custom_default: bool = None
update_custom_default: bool = False
@watch_router.post(
"/watching/", response={200: WatchUpdateResponse, 400: NotOk, 400: NotOk}
)
def update_watch(request, url: str, data: UpdateWatch):
url = DocumentURL.normalize_uri(url)
profile: UserProfile = request.auth
watched: Optional[UserWatch] = (
request.user.userwatch_set.select_related("watch", "user__defaultwatch")
.filter(watch__url=url)
.first()
)
user = watched.user if watched else request.user
watched_count = request.user.userwatch_set.count()
subscription_limit_reached = watched_count >= MAX_NON_SUBSCRIBED["notification"]
if data.unwatch:
if watched:
watched.delete()
subscription_limit_reached = (watched_count - 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
return 200, {
"subscription_limit_reached": subscription_limit_reached,
"ok": True,
}
title = data.title
if not title:
return 400, {"error": "missing title"}
path = data.path or ""
watched_data = {"custom": data.custom is not None}
if data.custom:
custom_default = bool(data.custom_default)
watched_data["custom_default"] = custom_default
custom_data = {
"content_updates": data.custom.content,
}
custom_data["browser_compatibility"] = sorted(data.custom.compatibility)
if custom_default:
try:
default_watch = user.defaultwatch
if data.update_custom_default:
for key, value in custom_data.items():
setattr(default_watch, key, value)
default_watch.save()
except DefaultWatch.DoesNotExist:
# Always create custom defaults if they are missing.
DefaultWatch.objects.update_or_create(user=user, defaults=custom_data)
watched_data.update(custom_data)
if watched:
watch: Watch = watched.watch
# Update the title / path if they changed.
if title != watch.title or path != watch.path:
watch.title = title
watch.path = path
watch.save()
else:
# Check on creation if allowed.
if (
watched_count >= MAX_NON_SUBSCRIBED["notification"]
and not profile.is_subscriber
):
return 400, {
"error": "max_subscriptions",
"info": {"max_allowed": MAX_NON_SUBSCRIBED["notification"]},
}
watch = Watch.objects.get_or_create(url=url, title=title, path=path)[0]
subscription_limit_reached = (watched_count + 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
user.userwatch_set.update_or_create(watch=watch, defaults=watched_data)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class UnwatchMany(Schema):
unwatch: list[str]
@watch_router.post(
"/unwatch-many/",
response={200: WatchUpdateResponse, 400: NotOk},
url_name="unwatch_many",
)
def unwatch(request, data: UnwatchMany):
request.user.userwatch_set.select_related("watch", "user__watch").filter(
watch__url__in=data.unwatch
).delete()
profile: UserProfile = request.auth
if profile.is_subscriber:
subscription_limit_reached = False
else:
subscription_limit_reached = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class CreateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
title: str
text: str
@admin_router.post("/create/", response={200: Ok, 400: NotOk})
def create(request, body: CreateNotificationSchema):
url = DocumentURL.normalize_uri(body.raw_url)
watchers = Watch.objects.filter(url=url)
if not watchers:
return 400, {"error": "No watchers found"}
notification_data, _ = NotificationData.objects.get_or_create(
text=body.text, title=body.title, type="content"
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
return True
class UpdateNotificationSchema(Schema):
filename: str
@admin_router.post("/update/", response={200: Ok, 400: NotOk, 401: NotOk})
def update(request, body: UpdateNotificationSchema):
try:
changes = json.loads(
requests.get(settings.NOTIFICATIONS_CHANGES_URL + body.filename).content
)
except Exception:
return 400, {"error": "Error while processing file"}
try:
process_changes(changes)
except Exception:
return 400, {"ok": False, "error": "Error while processing file"}
return 200, True
class CreatePRNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
repo: str = Field(..., alias="repo")
pr: int
@admin_router.post("/create/pr/", response={200: Ok, 400: NotOk, 401: NotOk})
def create_pr(request, body: CreatePRNotificationSchema):
url = DocumentURL.normalize_uri(body.raw_url)
watchers = Watch.objects.filter(url=url)
if not watchers:
return 400, {"error": "No watchers found"}
content = f"Page updated (see PR!{body.repo.strip('/')}!{body.pr}!!)"
notification_data, _ = NotificationData.objects.get_or_create(
text=content, title=watchers[0].title, type="content", page_url=body.raw_url
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
return 200, True
| from __future__ import annotations
import datetime
import json
from typing import Optional
# import requests
import requests
from django.conf import settings
from django.db.models import Q
from django.middleware.csrf import get_token
from ninja import Field, Router
from ninja.pagination import paginate
from kuma.documenturls.models import DocumentURL
from kuma.notifications.models import (
DefaultWatch,
Notification,
NotificationData,
UserWatch,
Watch,
)
from kuma.notifications.utils import process_changes
from kuma.settings.common import MAX_NON_SUBSCRIBED
from kuma.users.models import UserProfile
from ..pagination import LimitOffsetPaginatedResponse, LimitOffsetPaginationWithMeta
from ..smarter_schema import Schema
admin_router = Router(tags=["admin"])
notifications_router = Router(tags=["notifications"])
watch_router = Router(tags=["watch"])
limit_offset_paginate_with_meta = paginate(LimitOffsetPaginationWithMeta)
class Ok(Schema):
ok: bool = True
class WatchUpdateResponse(Schema):
ok: bool = True
subscription_limit_reached: bool = False
class NotOk(Schema):
ok: bool = False
error: str
info: dict = None
class NotificationSchema(Schema):
id: int
title: str = Field(..., alias="notification.title")
text: str = Field(..., alias="notification.text")
url: str = Field(..., alias="notification.page_url")
created: datetime.datetime = Field(..., alias="notification.created")
deleted: bool
read: bool
starred: bool
@notifications_router.get(
"/",
response=LimitOffsetPaginatedResponse[NotificationSchema],
url_name="plus.notifications",
)
@limit_offset_paginate_with_meta
def notifications(
request,
starred: bool = None,
unread: bool = None,
filterType: str = None,
q: str = None,
sort: str = None,
**kwargs,
):
qs = request.user.notification_set.select_related("notification")
if starred is not None:
qs = qs.filter(starred=starred)
if unread is not None:
qs = qs.filter(read=not unread)
if filterType:
qs = qs.filter(notification__type=filterType)
if q:
qs = qs.filter(
Q(notification__title__icontains=q) | Q(notification__text__icontains=q)
)
if sort == "title":
order_by = "notification__title"
else:
order_by = "-notification__created"
qs = qs.order_by(order_by, "id")
qs = qs.filter(deleted=False)
return qs
@notifications_router.post("/all/mark-as-read/", response=Ok)
def mark_all_as_read(request):
request.user.notification_set.filter(read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/mark-as-read/", response=Ok)
def mark_as_read(request, pk: int):
request.user.notification_set.filter(pk=pk, read=False).update(read=True)
return True
@notifications_router.post("/{int:pk}/toggle-starred/", response={200: Ok, 400: str})
def toggle_starred(request, pk: int):
try:
notification = Notification.objects.get(user=request.user, pk=pk)
except Notification.DoesNotExist:
return 400, "no matching notification"
notification.starred = not notification.starred
notification.save()
return 200, True
class StarMany(Schema):
ids: list[int]
@notifications_router.post(
"/star-ids/", response={200: Ok, 400: str}, url_name="notifications_star_ids"
)
def star_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=True
)
return 200, True
@notifications_router.post(
"/unstar-ids/", response={200: Ok, 400: str}, url_name="notifications_unstar_ids"
)
def unstar_many(request, data: StarMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
starred=False
)
return 200, True
@notifications_router.post(
"/{int:pk}/delete/", response=Ok, url_name="notifications_delete_id"
)
def delete_notification(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=True)
return True
@notifications_router.post("/{int:pk}/undo-deletion/", response=Ok)
def undo_deletion(request, pk: int):
request.user.notification_set.filter(id=pk).update(deleted=False)
return True
class DeleteMany(Schema):
ids: list[int]
@notifications_router.post(
"/delete-ids/", response={200: Ok, 400: NotOk}, url_name="notifications_delete_many"
)
def delete_notifications(request, data: DeleteMany):
request.user.notification_set.filter(deleted=False).filter(pk__in=data.ids).update(
deleted=True
)
return 200, True
class WatchSchema(Schema):
title: str
url: str
path: str
@watch_router.get("/watching/", url_name="watching")
def watched(request, q: str = "", url: str = "", limit: int = 20, offset: int = 0):
qs = request.user.userwatch_set.select_related("watch", "user__defaultwatch")
profile: UserProfile = request.auth
hasDefault = None
try:
hasDefault = request.user.defaultwatch.custom_serialize()
except DefaultWatch.DoesNotExist:
pass
if url:
url = DocumentURL.normalize_uri(url)
qs = qs.filter(watch__url=url)
if q:
qs = qs.filter(watch__title__icontains=q)
qs = qs[offset : offset + limit]
response = {}
results = []
# Default settings at top level if they exist
if hasDefault:
response["default"] = hasDefault
response["csrfmiddlewaretoken"] = get_token(request)
for item in qs:
res = {}
res["title"] = item.watch.title
res["url"] = item.watch.url
res["path"] = item.watch.path
# No custom notifications, just major updates.
if not item.custom:
res["status"] = "major"
else:
res["status"] = "custom"
# Subscribed to custom
if item.custom_default and hasDefault:
# Subscribed to the defaults
res["custom"] = "default"
else:
# Subscribed to fine-grained options
res["custom"] = item.custom_serialize()
results.append(res)
if url != "" and len(results) == 0:
response["status"] = "unwatched"
elif len(results) == 1 and url != "":
response = response | results[0]
else:
response["items"] = results
if not profile.is_subscriber:
response["subscription_limit_reached"] = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return response
class UpdateWatchCustom(Schema):
compatibility: list[str]
content: bool
class UpdateWatch(Schema):
unwatch: bool = None
title: str = None
path: str = None
custom: UpdateWatchCustom = None
custom_default: bool = None
update_custom_default: bool = False
@watch_router.post(
"/watching/", response={200: WatchUpdateResponse, 400: NotOk, 400: NotOk}
)
def update_watch(request, url: str, data: UpdateWatch):
url = DocumentURL.normalize_uri(url)
profile: UserProfile = request.auth
watched: Optional[UserWatch] = (
request.user.userwatch_set.select_related("watch", "user__defaultwatch")
.filter(watch__url=url)
.first()
)
user = watched.user if watched else request.user
watched_count = request.user.userwatch_set.count()
subscription_limit_reached = watched_count >= MAX_NON_SUBSCRIBED["notification"]
if data.unwatch:
if watched:
watched.delete()
subscription_limit_reached = (watched_count - 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
return 200, {
"subscription_limit_reached": subscription_limit_reached,
"ok": True,
}
title = data.title
if not title:
return 400, {"error": "missing title"}
path = data.path or ""
watched_data = {"custom": data.custom is not None}
if data.custom:
custom_default = bool(data.custom_default)
watched_data["custom_default"] = custom_default
custom_data = {
"content_updates": data.custom.content,
}
custom_data["browser_compatibility"] = sorted(data.custom.compatibility)
if custom_default:
try:
default_watch = user.defaultwatch
if data.update_custom_default:
for key, value in custom_data.items():
setattr(default_watch, key, value)
default_watch.save()
except DefaultWatch.DoesNotExist:
# Always create custom defaults if they are missing.
DefaultWatch.objects.update_or_create(user=user, defaults=custom_data)
watched_data.update(custom_data)
if watched:
watch: Watch = watched.watch
# Update the title / path if they changed.
if title != watch.title or path != watch.path:
watch.title = title
watch.path = path
watch.save()
else:
# Check on creation if allowed.
if (
watched_count >= MAX_NON_SUBSCRIBED["notification"]
and not profile.is_subscriber
):
return 400, {
"error": "max_subscriptions",
"info": {"max_allowed": MAX_NON_SUBSCRIBED["notification"]},
}
watch = Watch.objects.get_or_create(url=url, title=title, path=path)[0]
subscription_limit_reached = (watched_count + 1) >= MAX_NON_SUBSCRIBED[
"notification"
]
user.userwatch_set.update_or_create(watch=watch, defaults=watched_data)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class UnwatchMany(Schema):
unwatch: list[str]
@watch_router.post(
"/unwatch-many/",
response={200: WatchUpdateResponse, 400: NotOk},
url_name="unwatch_many",
)
def unwatch(request, data: UnwatchMany):
request.user.userwatch_set.select_related("watch", "user__watch").filter(
watch__url__in=data.unwatch
).delete()
profile: UserProfile = request.auth
if profile.is_subscriber:
subscription_limit_reached = False
else:
subscription_limit_reached = (
request.user.userwatch_set.count() >= MAX_NON_SUBSCRIBED["notification"]
)
return 200, {"subscription_limit_reached": subscription_limit_reached, "ok": True}
class CreateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
title: str
text: str
@admin_router.post("/create/", response={200: Ok, 400: NotOk})
def create(request, body: CreateNotificationSchema):
url = DocumentURL.normalize_uri(body.raw_url)
watchers = Watch.objects.filter(url=url)
if not watchers:
return 400, {"error": "No watchers found"}
notification_data, _ = NotificationData.objects.get_or_create(
text=body.text, title=body.title, type="content"
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
return True
class UpdateNotificationSchema(Schema):
filename: str
@admin_router.post("/update/", response={200: Ok, 400: NotOk, 401: NotOk})
def update(request, body: UpdateNotificationSchema):
try:
changes = json.loads(
requests.get(settings.NOTIFICATIONS_CHANGES_URL + body.filename).content
)
except Exception:
return 400, {"error": "Error while processing file"}
try:
process_changes(changes)
except Exception:
return 400, {"ok": False, "error": "Error while processing file"}
return 200, True
class ContentUpdateNotificationSchema(Schema):
raw_url: str = Field(..., alias="page")
pr_url: str = Field(..., alias="pr")
@admin_router.post(
"/update/content/",
response={200: Ok, 400: NotOk, 401: NotOk},
url_name="admin.update_content",
)
def update_content(request, body: ContentUpdateNotificationSchema):
try:
url = DocumentURL.normalize_uri(body.raw_url)
changes = [
{
"event": "content_updated",
"page_url": url,
"pr_url": body.pr_url,
}
]
process_changes(changes)
except Exception as e:
return 400, {"error": f"Error while processing PR: {repr(e)}"}
return 200, True
| caugner | e1cef9d866060531069a0a73405f42b012c61627 | a5247595160b84987621c7ef899b454e370db188 | ```suggestion
def update_content(request, body: ContentUpdateNotificationSchema):
``` | caugner | 5 |
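The reviewer's suggestion above only renames the handler signature; the substantive change in this file is that the old `/create/pr/` route gives way to `/update/content/`. For contrast, here is a small sketch of the two request bodies implied by the schemas in the listings; all concrete values are invented.

```python
# Request-body shapes before and after this change (illustrative values only).

# Old route: POST .../create/pr/ with CreatePRNotificationSchema
old_body = {
    "page": "/en-US/docs/Web/API/Fetch_API",  # alias for raw_url
    "repo": "mdn/content/",                   # handler strips surrounding slashes
    "pr": 12345,                              # integer PR number
}

# New route: POST .../update/content/ with ContentUpdateNotificationSchema
new_body = {
    "page": "/en-US/docs/Web/API/Fetch_API",            # alias for raw_url
    "pr": "https://github.com/mdn/content/pull/12345",  # full PR URL instead of repo + number
}
```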
mdn/kuma | 8,070 | feat(notifications): process Content Updates via /update/ route | Part of https://github.com/mdn/yari-private/issues/981.
## Before
1. `/update/` created only BCD Update Notifications.
2. `/create/pr/` created a single Content Update Notification independently.
## After
1. `/update/` creates both BCD **and Content** Update Notifications.
2. `/create/pr/` creates a single Content Update Notification **by reusing the same logic**. | null | 2022-04-05 17:20:14+00:00 | 2022-04-07 13:35:40+00:00 | kuma/notifications/utils.py | from collections import defaultdict
from kuma.notifications.browsers import browsers
from kuma.notifications.models import Notification, NotificationData, Watch
def publish_notification(path, text, dry_run=False, data=None):
# This traverses down the path to see if there are top-level watchers
parts = path.split(".")
suffix = []
while len(parts) > 0:
subpath = ".".join(parts)
watcher = Watch.objects.filter(path=subpath).first()
suffix.append(parts.pop())
if not watcher:
continue
# Add the suffix based on the path to the title.
# Since suffix contains the current title (which should be an exact match)
# we use the suffix as title (after reversing the order).
title = reversed(suffix)
title = ".".join(title)
if not dry_run:
notification_data, _ = NotificationData.objects.get_or_create(
title=title,
text=text,
data=data,
type="compat",
page_url=watcher.url,
)
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def get_browser_info(browser, preview=False):
name = browsers.get(browser, {"name": browser}).get("name", "")
if preview:
return browsers.get(browser, {"preview_name": browser, "name": browser}).get(
"preview_name", name
)
return name
def pluralize(browser_list):
if len(browser_list) == 1:
return browser_list[0]
else:
return ", ".join(browser_list[:-1]) + f", and {browser_list[-1]}"
BROWSER_GROUP = {
"firefox": "firefox",
"firefox_android": "firefox",
"chrome": "chrome",
"chrome_android": "chrome",
"edge": "chrome",
"webview_android": "chrome",
"deno": "deno",
"safari": "safari",
"safari_ios": "safari",
"ie": "ie",
"nodejs": "nodejs",
"opera": "opera",
"opera_android": "opera",
"samsunginternet_android": "samsunginternet_android",
}
COPY = {
"added_stable": "Supported in ",
"removed_stable": "Removed from ",
"added_preview": "In development in ",
}
def process_changes(changes, dry_run=False):
notifications = []
for change in changes:
if change["event"] in ["added_stable", "removed_stable", "added_preview"]:
groups = defaultdict(list)
for browser_data in change["browsers"]:
browser = get_browser_info(
browser_data["browser"],
change["event"] == "added_preview",
)
groups[BROWSER_GROUP.get(browser_data["browser"], browser)].append(
{
"browser": f"{browser} {browser_data['version']}",
"data": change,
}
)
for group in groups.values():
browser_list = pluralize([i["browser"] for i in group])
notifications.append(
{
"path": change["path"],
"text": COPY[change["event"]] + browser_list,
"data": [i["data"] for i in group],
}
)
elif change["event"] == "added_subfeatures":
n = len(change["subfeatures"])
notifications.append(
{
"path": change["path"],
"text": f"{n} compatibility subfeature{'s'[:n ^ 1]} added",
"data": change,
}
)
elif change["event"] == "added_nonnull":
browser_list = [
get_browser_info(i["browser"]) for i in change["support_changes"]
]
text = pluralize(browser_list)
notifications.append(
{
"path": change["path"],
"text": f"More complete compatibility data added for {text}",
"data": change,
}
)
for notification in notifications:
publish_notification(**notification, dry_run=dry_run)
| import re
from collections import defaultdict
from kuma.documenturls.models import DocumentURL
from kuma.notifications.browsers import browsers
from kuma.notifications.models import Notification, NotificationData, Watch
def publish_bcd_notification(path, text, data=None):
# This traverses down the path to see if there are top-level watchers
parts = path.split(".")
suffix = []
while len(parts) > 0:
subpath = ".".join(parts)
watcher = Watch.objects.filter(path=subpath).first()
suffix.append(parts.pop())
if not watcher:
continue
# Add the suffix based on the path to the title.
# Since suffix contains the current title (which should be an exact match)
# we use the suffix as title (after reversing the order).
title = reversed(suffix)
title = ".".join(title)
notification_data, _ = NotificationData.objects.get_or_create(
title=title,
text=text,
data=data,
type="compat",
page_url=watcher.url,
)
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def get_browser_info(browser, preview=False):
name = browsers.get(browser, {"name": browser}).get("name", "")
if preview:
return browsers.get(browser, {"preview_name": browser, "name": browser}).get(
"preview_name", name
)
return name
def pluralize(browser_list):
if len(browser_list) == 1:
return browser_list[0]
else:
return ", ".join(browser_list[:-1]) + f", and {browser_list[-1]}"
BROWSER_GROUP = {
"firefox": "firefox",
"firefox_android": "firefox",
"chrome": "chrome",
"chrome_android": "chrome",
"edge": "chrome",
"webview_android": "chrome",
"deno": "deno",
"safari": "safari",
"safari_ios": "safari",
"ie": "ie",
"nodejs": "nodejs",
"opera": "opera",
"opera_android": "opera",
"samsunginternet_android": "samsunginternet_android",
}
COPY = {
"added_stable": "Supported in ",
"removed_stable": "Removed from ",
"added_preview": "In development in ",
}
def publish_content_notification(url, text):
watchers = Watch.objects.filter(url=url)
if not watchers:
return
notification_data, _ = NotificationData.objects.get_or_create(
text=text, title=watchers[0].title, type="content", page_url=url
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def process_changes(changes):
bcd_notifications = []
content_notifications = []
for change in changes:
if change["event"] in ["added_stable", "removed_stable", "added_preview"]:
groups = defaultdict(list)
for browser_data in change["browsers"]:
browser = get_browser_info(
browser_data["browser"],
change["event"] == "added_preview",
)
groups[BROWSER_GROUP.get(browser_data["browser"], browser)].append(
{
"browser": f"{browser} {browser_data['version']}",
"data": change,
}
)
for group in groups.values():
browser_list = pluralize([i["browser"] for i in group])
bcd_notifications.append(
{
"path": change["path"],
"text": COPY[change["event"]] + browser_list,
"data": [i["data"] for i in group],
}
)
elif change["event"] == "added_subfeatures":
n = len(change["subfeatures"])
bcd_notifications.append(
{
"path": change["path"],
"text": f"{n} compatibility subfeature{'s'[:n ^ 1]} added",
"data": change,
}
)
elif change["event"] == "added_nonnull":
browser_list = [
get_browser_info(i["browser"]) for i in change["support_changes"]
]
text = pluralize(browser_list)
bcd_notifications.append(
{
"path": change["path"],
"text": f"More complete compatibility data added for {text}",
"data": change,
}
)
elif change["event"] == "content_updated":
url = DocumentURL.normalize_uri(change["page_url"])
m = re.match(r"^https://github.com/(.+)/pull/(\d+)$", change["pr_url"])
content_notifications.append(
{
"url": url,
"text": f"Page updated (see PR!{m.group(1)}!{m.group(2)}!!)",
}
)
for notification in bcd_notifications:
publish_bcd_notification(**notification)
for notification in content_notifications:
publish_content_notification(**notification)
| caugner | e1cef9d866060531069a0a73405f42b012c61627 | a5247595160b84987621c7ef899b454e370db188 | Is dry_run something we are interested in maintaining? | Guyzeroth | 6 |
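On the question above: `dry_run` is dropped in the new version, so both publish helpers always write to the database; if a dry-run mode were still wanted it would have to guard the `get_or_create` calls again. To show what the new `content_updated` branch actually produces, here is a self-contained re-run of just its string formatting, using a made-up change payload and no database access.

```python
# Standalone reproduction of the notification text built by the new
# "content_updated" branch of process_changes (example data only).
import re

change = {
    "event": "content_updated",
    "page_url": "/en-us/docs/web/api/fetch_api",
    "pr_url": "https://github.com/mdn/content/pull/12345",
}

m = re.match(r"^https://github.com/(.+)/pull/(\d+)$", change["pr_url"])
text = f"Page updated (see PR!{m.group(1)}!{m.group(2)}!!)"
assert text == "Page updated (see PR!mdn/content!12345!!)"
```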
mdn/kuma | 8,070 | feat(notifications): process Content Updates via /update/ route | Part of https://github.com/mdn/yari-private/issues/981.
## Before
1. `/update/` created only BCD Update Notifications.
2. `/create/pr/` created a single Content Update Notification independently.
## After
1. `/update/` creates both BCD **and Content** Update Notifications.
2. `/create/pr/` creates a single Content Update Notification **by reusing the same logic**. | null | 2022-04-05 17:20:14+00:00 | 2022-04-07 13:35:40+00:00 | kuma/notifications/utils.py | from collections import defaultdict
from kuma.notifications.browsers import browsers
from kuma.notifications.models import Notification, NotificationData, Watch
def publish_notification(path, text, dry_run=False, data=None):
# This traverses down the path to see if there are top-level watchers
parts = path.split(".")
suffix = []
while len(parts) > 0:
subpath = ".".join(parts)
watcher = Watch.objects.filter(path=subpath).first()
suffix.append(parts.pop())
if not watcher:
continue
# Add the suffix based on the path to the title.
# Since suffix contains the current title (which should be an exact match)
# we use the suffix as title (after reversing the order).
title = reversed(suffix)
title = ".".join(title)
if not dry_run:
notification_data, _ = NotificationData.objects.get_or_create(
title=title,
text=text,
data=data,
type="compat",
page_url=watcher.url,
)
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def get_browser_info(browser, preview=False):
name = browsers.get(browser, {"name": browser}).get("name", "")
if preview:
return browsers.get(browser, {"preview_name": browser, "name": browser}).get(
"preview_name", name
)
return name
def pluralize(browser_list):
if len(browser_list) == 1:
return browser_list[0]
else:
return ", ".join(browser_list[:-1]) + f", and {browser_list[-1]}"
BROWSER_GROUP = {
"firefox": "firefox",
"firefox_android": "firefox",
"chrome": "chrome",
"chrome_android": "chrome",
"edge": "chrome",
"webview_android": "chrome",
"deno": "deno",
"safari": "safari",
"safari_ios": "safari",
"ie": "ie",
"nodejs": "nodejs",
"opera": "opera",
"opera_android": "opera",
"samsunginternet_android": "samsunginternet_android",
}
COPY = {
"added_stable": "Supported in ",
"removed_stable": "Removed from ",
"added_preview": "In development in ",
}
def process_changes(changes, dry_run=False):
notifications = []
for change in changes:
if change["event"] in ["added_stable", "removed_stable", "added_preview"]:
groups = defaultdict(list)
for browser_data in change["browsers"]:
browser = get_browser_info(
browser_data["browser"],
change["event"] == "added_preview",
)
groups[BROWSER_GROUP.get(browser_data["browser"], browser)].append(
{
"browser": f"{browser} {browser_data['version']}",
"data": change,
}
)
for group in groups.values():
browser_list = pluralize([i["browser"] for i in group])
notifications.append(
{
"path": change["path"],
"text": COPY[change["event"]] + browser_list,
"data": [i["data"] for i in group],
}
)
elif change["event"] == "added_subfeatures":
n = len(change["subfeatures"])
notifications.append(
{
"path": change["path"],
"text": f"{n} compatibility subfeature{'s'[:n ^ 1]} added",
"data": change,
}
)
elif change["event"] == "added_nonnull":
browser_list = [
get_browser_info(i["browser"]) for i in change["support_changes"]
]
text = pluralize(browser_list)
notifications.append(
{
"path": change["path"],
"text": f"More complete compatibility data added for {text}",
"data": change,
}
)
for notification in notifications:
publish_notification(**notification, dry_run=dry_run)
| import re
from collections import defaultdict
from kuma.documenturls.models import DocumentURL
from kuma.notifications.browsers import browsers
from kuma.notifications.models import Notification, NotificationData, Watch
def publish_bcd_notification(path, text, data=None):
# This traverses down the path to see if there are top-level watchers
parts = path.split(".")
suffix = []
while len(parts) > 0:
subpath = ".".join(parts)
watcher = Watch.objects.filter(path=subpath).first()
suffix.append(parts.pop())
if not watcher:
continue
# Add the suffix based on the path to the title.
# Since suffix contains the current title (which should be an exact match)
# we use the suffix as title (after reversing the order).
title = reversed(suffix)
title = ".".join(title)
notification_data, _ = NotificationData.objects.get_or_create(
title=title,
text=text,
data=data,
type="compat",
page_url=watcher.url,
)
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def get_browser_info(browser, preview=False):
name = browsers.get(browser, {"name": browser}).get("name", "")
if preview:
return browsers.get(browser, {"preview_name": browser, "name": browser}).get(
"preview_name", name
)
return name
def pluralize(browser_list):
if len(browser_list) == 1:
return browser_list[0]
else:
return ", ".join(browser_list[:-1]) + f", and {browser_list[-1]}"
BROWSER_GROUP = {
"firefox": "firefox",
"firefox_android": "firefox",
"chrome": "chrome",
"chrome_android": "chrome",
"edge": "chrome",
"webview_android": "chrome",
"deno": "deno",
"safari": "safari",
"safari_ios": "safari",
"ie": "ie",
"nodejs": "nodejs",
"opera": "opera",
"opera_android": "opera",
"samsunginternet_android": "samsunginternet_android",
}
COPY = {
"added_stable": "Supported in ",
"removed_stable": "Removed from ",
"added_preview": "In development in ",
}
def publish_content_notification(url, text):
watchers = Watch.objects.filter(url=url)
if not watchers:
return
notification_data, _ = NotificationData.objects.get_or_create(
text=text, title=watchers[0].title, type="content", page_url=url
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def process_changes(changes):
bcd_notifications = []
content_notifications = []
for change in changes:
if change["event"] in ["added_stable", "removed_stable", "added_preview"]:
groups = defaultdict(list)
for browser_data in change["browsers"]:
browser = get_browser_info(
browser_data["browser"],
change["event"] == "added_preview",
)
groups[BROWSER_GROUP.get(browser_data["browser"], browser)].append(
{
"browser": f"{browser} {browser_data['version']}",
"data": change,
}
)
for group in groups.values():
browser_list = pluralize([i["browser"] for i in group])
bcd_notifications.append(
{
"path": change["path"],
"text": COPY[change["event"]] + browser_list,
"data": [i["data"] for i in group],
}
)
elif change["event"] == "added_subfeatures":
n = len(change["subfeatures"])
bcd_notifications.append(
{
"path": change["path"],
"text": f"{n} compatibility subfeature{'s'[:n ^ 1]} added",
"data": change,
}
)
elif change["event"] == "added_nonnull":
browser_list = [
get_browser_info(i["browser"]) for i in change["support_changes"]
]
text = pluralize(browser_list)
bcd_notifications.append(
{
"path": change["path"],
"text": f"More complete compatibility data added for {text}",
"data": change,
}
)
elif change["event"] == "content_updated":
url = DocumentURL.normalize_uri(change["page_url"])
m = re.match(r"^https://github.com/(.+)/pull/(\d+)$", change["pr_url"])
content_notifications.append(
{
"url": url,
"text": f"Page updated (see PR!{m.group(1)}!{m.group(2)}!!)",
}
)
for notification in bcd_notifications:
publish_bcd_notification(**notification)
for notification in content_notifications:
publish_content_notification(**notification)
| caugner | e1cef9d866060531069a0a73405f42b012c61627 | a5247595160b84987621c7ef899b454e370db188 | +1 for removing that. | fiji-flo | 7 |
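A side note on the unchanged helpers in these listings: `pluralize()` joins browser names with a trailing "and". It is copied verbatim below so its output can be checked in isolation; the browser strings are arbitrary examples.

```python
# pluralize() as it appears in kuma/notifications/utils.py, exercised standalone.
def pluralize(browser_list):
    if len(browser_list) == 1:
        return browser_list[0]
    else:
        return ", ".join(browser_list[:-1]) + f", and {browser_list[-1]}"

assert pluralize(["Firefox 100"]) == "Firefox 100"
# Note: with exactly two items the comma before "and" is kept.
assert pluralize(["Firefox 100", "Chrome 101"]) == "Firefox 100, and Chrome 101"
assert (
    pluralize(["Firefox 100", "Chrome 101", "Safari 15.4"])
    == "Firefox 100, Chrome 101, and Safari 15.4"
)
```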
mdn/kuma | 8,070 | feat(notifications): process Content Updates via /update/ route | Part of https://github.com/mdn/yari-private/issues/981.
## Before
1. `/update/` created only BCD Update Notifications.
2. `/create/pr/` created a single Content Update Notification independently.
## After
1. `/update/` creates both BCD **and Content** Update Notifications.
2. `/create/pr/` creates a single Content Update Notification **by reusing the same logic**. | null | 2022-04-05 17:20:14+00:00 | 2022-04-07 13:35:40+00:00 | kuma/notifications/utils.py | from collections import defaultdict
from kuma.notifications.browsers import browsers
from kuma.notifications.models import Notification, NotificationData, Watch
def publish_notification(path, text, dry_run=False, data=None):
# This traverses down the path to see if there are top-level watchers
parts = path.split(".")
suffix = []
while len(parts) > 0:
subpath = ".".join(parts)
watcher = Watch.objects.filter(path=subpath).first()
suffix.append(parts.pop())
if not watcher:
continue
# Add the suffix based on the path to the title.
# Since suffix contains the current title (which should be an exact match)
# we use the suffix as title (after reversing the order).
title = reversed(suffix)
title = ".".join(title)
if not dry_run:
notification_data, _ = NotificationData.objects.get_or_create(
title=title,
text=text,
data=data,
type="compat",
page_url=watcher.url,
)
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def get_browser_info(browser, preview=False):
name = browsers.get(browser, {"name": browser}).get("name", "")
if preview:
return browsers.get(browser, {"preview_name": browser, "name": browser}).get(
"preview_name", name
)
return name
def pluralize(browser_list):
if len(browser_list) == 1:
return browser_list[0]
else:
return ", ".join(browser_list[:-1]) + f", and {browser_list[-1]}"
BROWSER_GROUP = {
"firefox": "firefox",
"firefox_android": "firefox",
"chrome": "chrome",
"chrome_android": "chrome",
"edge": "chrome",
"webview_android": "chrome",
"deno": "deno",
"safari": "safari",
"safari_ios": "safari",
"ie": "ie",
"nodejs": "nodejs",
"opera": "opera",
"opera_android": "opera",
"samsunginternet_android": "samsunginternet_android",
}
COPY = {
"added_stable": "Supported in ",
"removed_stable": "Removed from ",
"added_preview": "In development in ",
}
def process_changes(changes, dry_run=False):
notifications = []
for change in changes:
if change["event"] in ["added_stable", "removed_stable", "added_preview"]:
groups = defaultdict(list)
for browser_data in change["browsers"]:
browser = get_browser_info(
browser_data["browser"],
change["event"] == "added_preview",
)
groups[BROWSER_GROUP.get(browser_data["browser"], browser)].append(
{
"browser": f"{browser} {browser_data['version']}",
"data": change,
}
)
for group in groups.values():
browser_list = pluralize([i["browser"] for i in group])
notifications.append(
{
"path": change["path"],
"text": COPY[change["event"]] + browser_list,
"data": [i["data"] for i in group],
}
)
elif change["event"] == "added_subfeatures":
n = len(change["subfeatures"])
notifications.append(
{
"path": change["path"],
"text": f"{n} compatibility subfeature{'s'[:n ^ 1]} added",
"data": change,
}
)
elif change["event"] == "added_nonnull":
browser_list = [
get_browser_info(i["browser"]) for i in change["support_changes"]
]
text = pluralize(browser_list)
notifications.append(
{
"path": change["path"],
"text": f"More complete compatibility data added for {text}",
"data": change,
}
)
for notification in notifications:
publish_notification(**notification, dry_run=dry_run)
| import re
from collections import defaultdict
from kuma.documenturls.models import DocumentURL
from kuma.notifications.browsers import browsers
from kuma.notifications.models import Notification, NotificationData, Watch
def publish_bcd_notification(path, text, data=None):
# This traverses down the path to see if there are top-level watchers
parts = path.split(".")
suffix = []
while len(parts) > 0:
subpath = ".".join(parts)
watcher = Watch.objects.filter(path=subpath).first()
suffix.append(parts.pop())
if not watcher:
continue
# Add the suffix based on the path to the title.
# Since suffix contains the current title (which should be an exact match)
# we use the suffix as title (after reversing the order).
title = reversed(suffix)
title = ".".join(title)
notification_data, _ = NotificationData.objects.get_or_create(
title=title,
text=text,
data=data,
type="compat",
page_url=watcher.url,
)
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def get_browser_info(browser, preview=False):
name = browsers.get(browser, {"name": browser}).get("name", "")
if preview:
return browsers.get(browser, {"preview_name": browser, "name": browser}).get(
"preview_name", name
)
return name
def pluralize(browser_list):
if len(browser_list) == 1:
return browser_list[0]
else:
return ", ".join(browser_list[:-1]) + f", and {browser_list[-1]}"
BROWSER_GROUP = {
"firefox": "firefox",
"firefox_android": "firefox",
"chrome": "chrome",
"chrome_android": "chrome",
"edge": "chrome",
"webview_android": "chrome",
"deno": "deno",
"safari": "safari",
"safari_ios": "safari",
"ie": "ie",
"nodejs": "nodejs",
"opera": "opera",
"opera_android": "opera",
"samsunginternet_android": "samsunginternet_android",
}
COPY = {
"added_stable": "Supported in ",
"removed_stable": "Removed from ",
"added_preview": "In development in ",
}
def publish_content_notification(url, text):
watchers = Watch.objects.filter(url=url)
if not watchers:
return
notification_data, _ = NotificationData.objects.get_or_create(
text=text, title=watchers[0].title, type="content", page_url=url
)
for watcher in watchers:
# considering the possibility of multiple pages existing for the same path
for user in watcher.users.all():
Notification.objects.create(notification=notification_data, user=user)
def process_changes(changes):
bcd_notifications = []
content_notifications = []
for change in changes:
if change["event"] in ["added_stable", "removed_stable", "added_preview"]:
groups = defaultdict(list)
for browser_data in change["browsers"]:
browser = get_browser_info(
browser_data["browser"],
change["event"] == "added_preview",
)
groups[BROWSER_GROUP.get(browser_data["browser"], browser)].append(
{
"browser": f"{browser} {browser_data['version']}",
"data": change,
}
)
for group in groups.values():
browser_list = pluralize([i["browser"] for i in group])
bcd_notifications.append(
{
"path": change["path"],
"text": COPY[change["event"]] + browser_list,
"data": [i["data"] for i in group],
}
)
elif change["event"] == "added_subfeatures":
n = len(change["subfeatures"])
bcd_notifications.append(
{
"path": change["path"],
"text": f"{n} compatibility subfeature{'s'[:n ^ 1]} added",
"data": change,
}
)
elif change["event"] == "added_nonnull":
browser_list = [
get_browser_info(i["browser"]) for i in change["support_changes"]
]
text = pluralize(browser_list)
bcd_notifications.append(
{
"path": change["path"],
"text": f"More complete compatibility data added for {text}",
"data": change,
}
)
elif change["event"] == "content_updated":
url = DocumentURL.normalize_uri(change["page_url"])
m = re.match(r"^https://github.com/(.+)/pull/(\d+)$", change["pr_url"])
content_notifications.append(
{
"url": url,
"text": f"Page updated (see PR!{m.group(1)}!{m.group(2)}!!)",
}
)
for notification in bcd_notifications:
publish_bcd_notification(**notification)
for notification in content_notifications:
publish_content_notification(**notification)
| caugner | e1cef9d866060531069a0a73405f42b012c61627 | a5247595160b84987621c7ef899b454e370db188 | Fixed in 049a72725. | caugner | 8 |
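One terse idiom in the subfeature copy above is easy to misread: `'s'[:n ^ 1]` slices to an empty string only when `n == 1`, so the word is pluralized for every other count. A quick standalone check with example counts:

```python
# Verifying the 's'[:n ^ 1] pluralization trick used for subfeature notifications.
for n, expected in [
    (1, "1 compatibility subfeature added"),
    (2, "2 compatibility subfeatures added"),
    (5, "5 compatibility subfeatures added"),
]:
    assert f"{n} compatibility subfeature{'s'[:n ^ 1]} added" == expected
```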
mdn/kuma | 8,026 | Renew access token every 12 hours. | null | null | 2022-01-12 11:37:13+00:00 | 2022-01-12 12:50:08+00:00 | kuma/users/auth.py | import time
from django.conf import settings
from django.contrib.auth import get_user_model
from mozilla_django_oidc.auth import OIDCAuthenticationBackend
from .models import UserProfile
class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend):
"""Extend mozilla-django-oidc authbackend."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.refresh_token = None
def get_token(self, payload):
"""Override get_token to extract the refresh token."""
token_info = super().get_token(payload)
self.refresh_token = token_info.get("refresh_token")
return token_info
def filter_users_by_claims(self, claims):
user_model = get_user_model()
if not (fxa_uid := claims.get("sub")):
return user_model.objects.none()
return user_model.objects.filter(username=fxa_uid)
def create_user(self, claims):
user = super().create_user(claims)
self._create_or_set_user_profile(user, claims)
self.request.created = True
return user
def update_user(self, user, claims):
self._create_or_set_user_profile(user, claims)
return user
def get_username(self, claims):
"""Get the username from the claims."""
# use the fxa_uid as the username
return claims.get("sub", claims.get("uid"))
@staticmethod
def create_or_update_subscriber(claims, user=None):
"""Retrieve or create a user with a profile.
Static helper method that routes requests that are not part of the login flow
"""
email = claims.get("email")
fxa_uid = claims.get("sub", claims.get("uid"))
if not fxa_uid:
return
try:
# short-circuit if we already have a user
user = user or get_user_model().objects.get(username=fxa_uid)
except get_user_model().DoesNotExist:
user = get_user_model().objects.create_user(email=email, username=fxa_uid)
# update the email if needed
if email and user.email != email:
user.email = email
# toggle user status based on subscriptions
user.is_active = True
user.save()
profile, _ = UserProfile.objects.get_or_create(user=user)
if avatar := claims.get("avatar"):
profile.avatar = avatar
profile.is_subscriber = settings.MDN_PLUS_SUBSCRIPTION in claims.get(
"subscriptions", []
) or settings.MDN_PLUS_SUBSCRIPTION == claims.get("fxa-subscriptions", "")
profile.save()
return user
def _create_or_set_user_profile(self, user, claims):
"""Update user and profile attributes."""
user = self.create_or_update_subscriber(claims, user)
if self.refresh_token:
UserProfile.objects.filter(user=user).update(
fxa_refresh_token=self.refresh_token
)
def logout_url(request):
"""This gets called by mozilla_django_oidc when a user has signed out."""
return (
request.GET.get("next")
or request.session.get("oidc_login_next")
or getattr(settings, "LOGOUT_REDIRECT_URL", None)
or "/"
)
def is_authorized_request(token, **kwargs):
auth = token.split()
if auth[0].lower() != "bearer":
return {"error": "invalid token type"}
jwt_token = auth[1]
if not (payload := KumaOIDCAuthenticationBackend().verify_token(jwt_token)):
return {"error": "invalid token"}
issuer = payload["iss"]
exp = payload["exp"]
# # If the issuer is not Firefox Accounts log an error
if settings.FXA_TOKEN_ISSUER != issuer:
return {"error": "invalid token issuer"}
# Check if the token is expired
if exp < time.time():
return {"error": "token expired"}
return payload
| import time
import requests
from django.conf import settings
from django.contrib.auth import get_user_model
from mozilla_django_oidc.auth import OIDCAuthenticationBackend
from .models import UserProfile
class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend):
"""Extend mozilla-django-oidc authbackend."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.refresh_token = None
def get_token(self, payload):
"""Override get_token to extract the refresh token."""
token_info = super().get_token(payload)
self.refresh_token = token_info.get("refresh_token")
return token_info
@classmethod
def refresh_access_token(cls, refresh_token, ttl=None):
"""Gets a new access_token by using a refresh_token.
returns: the actual token or an empty dictionary
"""
if not refresh_token:
return {}
obj = cls()
payload = {
"client_id": obj.OIDC_RP_CLIENT_ID,
"client_secret": obj.OIDC_RP_CLIENT_SECRET,
"grant_type": "refresh_token",
"refresh_token": refresh_token,
}
if ttl:
payload.update({"ttl": ttl})
try:
return obj.get_token(payload=payload)
except requests.exceptions.HTTPError:
return {}
def filter_users_by_claims(self, claims):
user_model = get_user_model()
if not (fxa_uid := claims.get("sub")):
return user_model.objects.none()
return user_model.objects.filter(username=fxa_uid)
def create_user(self, claims):
user = super().create_user(claims)
self._create_or_set_user_profile(user, claims)
self.request.created = True
return user
def update_user(self, user, claims):
self._create_or_set_user_profile(user, claims)
return user
def get_username(self, claims):
"""Get the username from the claims."""
# use the fxa_uid as the username
return claims.get("sub", claims.get("uid"))
@staticmethod
def create_or_update_subscriber(claims, user=None):
"""Retrieve or create a user with a profile.
Static helper method that routes requests that are not part of the login flow
"""
email = claims.get("email")
fxa_uid = claims.get("sub", claims.get("uid"))
if not fxa_uid:
return
try:
# short-circuit if we already have a user
user = user or get_user_model().objects.get(username=fxa_uid)
except get_user_model().DoesNotExist:
user = get_user_model().objects.create_user(email=email, username=fxa_uid)
# update the email if needed
if email and user.email != email:
user.email = email
# toggle user status based on subscriptions
user.is_active = True
user.save()
profile, _ = UserProfile.objects.get_or_create(user=user)
if avatar := claims.get("avatar"):
profile.avatar = avatar
profile.is_subscriber = settings.MDN_PLUS_SUBSCRIPTION in claims.get(
"subscriptions", []
) or settings.MDN_PLUS_SUBSCRIPTION == claims.get("fxa-subscriptions", "")
profile.save()
return user
def _create_or_set_user_profile(self, user, claims):
"""Update user and profile attributes."""
user = self.create_or_update_subscriber(claims, user)
if self.refresh_token:
UserProfile.objects.filter(user=user).update(
fxa_refresh_token=self.refresh_token
)
def logout_url(request):
"""This gets called by mozilla_django_oidc when a user has signed out."""
return (
request.GET.get("next")
or request.session.get("oidc_login_next")
or getattr(settings, "LOGOUT_REDIRECT_URL", None)
or "/"
)
def is_authorized_request(token, **kwargs):
auth = token.split()
if auth[0].lower() != "bearer":
return {"error": "invalid token type"}
jwt_token = auth[1]
if not (payload := KumaOIDCAuthenticationBackend().verify_token(jwt_token)):
return {"error": "invalid token"}
issuer = payload["iss"]
exp = payload["exp"]
# # If the issuer is not Firefox Accounts log an error
if settings.FXA_TOKEN_ISSUER != issuer:
return {"error": "invalid token issuer"}
# Check if the token is expired
if exp < time.time():
return {"error": "token expired"}
return payload
| akatsoulas | 20857c3ed1eef5bcbe2774da660c4b93ca9d9d42 | 0ab25fbff2725d12c9fe2e8c83089236639193d6 | ```suggestion
import time
import requests
from django.conf import settings
``` | fiji-flo | 9 |
mdn/kuma | 8,026 | Renew access token every 12 hours. | null | null | 2022-01-12 11:37:13+00:00 | 2022-01-12 12:50:08+00:00 | kuma/users/middleware.py | import time
import requests
from django.conf import settings
from django.contrib.auth import logout
from django.core.exceptions import MiddlewareNotUsed
from mozilla_django_oidc.middleware import SessionRefresh
class ValidateAccessTokenMiddleware(SessionRefresh):
"""Validate the access token every hour.
Verify that the access token has not been invalidated
by the user through the Firefox Accounts web interface.
"""
def __init__(self, *args, **kwargs):
if settings.DEV and settings.DEBUG:
raise MiddlewareNotUsed
super().__init__(*args, **kwargs)
def process_request(self, request):
if not self.is_refreshable_url(request):
return
expiration = request.session.get("oidc_id_token_expiration", 0)
now = time.time()
access_token = request.session.get("oidc_access_token")
if access_token and expiration < now:
response_token_info = (
requests.post(settings.FXA_VERIFY_URL, data={"token": access_token})
).json()
# if the token is not verified, log the user out
if (
response_token_info.get("code") == 400
and response_token_info.get("message") == "Invalid token"
):
profile = request.user.userprofile
profile.fxa_refresh_token = ""
profile.save()
logout(request)
else:
request.session["oidc_id_token_expiration"] = (
now + settings.FXA_TOKEN_EXPIRY
)
| import time
from django.conf import settings
from django.contrib.auth import logout
from django.core.exceptions import MiddlewareNotUsed
from mozilla_django_oidc.middleware import SessionRefresh
from kuma.users.auth import KumaOIDCAuthenticationBackend
class ValidateAccessTokenMiddleware(SessionRefresh):
"""Validate the access token every hour.
Verify that the access token has not been invalidated
by the user through the Firefox Accounts web interface.
"""
def __init__(self, *args, **kwargs):
if settings.DEV and settings.DEBUG:
raise MiddlewareNotUsed
super().__init__(*args, **kwargs)
def process_request(self, request):
if not self.is_refreshable_url(request):
return
expiration = request.session.get("oidc_id_token_expiration", 0)
now = time.time()
access_token = request.session.get("oidc_access_token")
profile = request.user.userprofile
if access_token and expiration < now:
token_info = KumaOIDCAuthenticationBackend.refresh_access_token(
profile.fxa_refresh_token
)
new_access_token = token_info.get("access_token")
if new_access_token:
request.session["oidc_access_token"] = new_access_token
request.session["oidc_id_token_expiration"] = (
now + settings.FXA_TOKEN_EXPIRY
)
else:
profile.fxa_refresh_token = ""
profile.save()
logout(request)
| akatsoulas | 20857c3ed1eef5bcbe2774da660c4b93ca9d9d42 | 0ab25fbff2725d12c9fe2e8c83089236639193d6 | ```suggestion
token_info = KumaOIDCAuthenticationBackend.refresh_access_token(
profile.fxa_refresh_token
)
new_access_token = token_info.get("access_token")
``` | fiji-flo | 10 |
mdn/kuma | 8,020 | support core users | null | null | 2021-12-15 19:51:40+00:00 | 2021-12-16 17:31:33+00:00 | kuma/api/v1/views.py | from django.conf import settings
from django.http import HttpResponseForbidden, JsonResponse
from django.middleware.csrf import get_token
from django.views.decorators.cache import never_cache
from django.views.decorators.http import require_GET
from kuma.api.v1.forms import AccountSettingsForm
from kuma.users.models import UserProfile
@never_cache
@require_GET
def whoami(request):
"""
Return a JSON object representing the current user, either
authenticated or anonymous.
"""
data = {}
user = request.user
cloudfront_country_header = "HTTP_CLOUDFRONT_VIEWER_COUNTRY_NAME"
cloudfront_country_value = request.META.get(cloudfront_country_header)
if cloudfront_country_value:
data.update({"geo": {"country": cloudfront_country_value}})
if not user.is_authenticated:
return JsonResponse(data)
data = {
"username": user.username,
"is_authenticated": True,
"email": user.email,
}
if user.is_staff:
data["is_staff"] = True
if user.is_superuser:
data["is_superuser"] = True
if user.is_active:
data["is_subscriber"] = True
try:
profile = UserProfile.objects.get(user=user)
except UserProfile.DoesNotExist:
profile = None
if profile:
data["avatar_url"] = profile.avatar
return JsonResponse(data)
@never_cache
def account_settings(request):
user = request.user
if not user.is_authenticated:
return HttpResponseForbidden("not signed in")
for user_profile in UserProfile.objects.filter(user=user):
break
else:
user_profile = None
if request.method == "DELETE":
user.delete()
return JsonResponse({"deleted": True})
elif request.method == "POST":
form = AccountSettingsForm(request.POST)
if not form.is_valid():
return JsonResponse({"errors": form.errors.get_json_data()}, status=400)
set_locale = None
if form.cleaned_data.get("locale"):
set_locale = form.cleaned_data["locale"]
if user_profile:
user_profile.locale = set_locale
user_profile.save()
else:
user_profile = UserProfile.objects.create(user=user, locale=set_locale)
response = JsonResponse({"ok": True})
if set_locale:
response.set_cookie(
key=settings.LANGUAGE_COOKIE_NAME,
value=set_locale,
max_age=settings.LANGUAGE_COOKIE_AGE,
path=settings.LANGUAGE_COOKIE_PATH,
domain=settings.LANGUAGE_COOKIE_DOMAIN,
secure=settings.LANGUAGE_COOKIE_SECURE,
)
return response
context = {
"csrfmiddlewaretoken": get_token(request),
"locale": user_profile.locale if user_profile else None,
}
return JsonResponse(context)
| from django.conf import settings
from django.http import HttpResponseForbidden, JsonResponse
from django.middleware.csrf import get_token
from django.views.decorators.cache import never_cache
from django.views.decorators.http import require_GET
from kuma.api.v1.forms import AccountSettingsForm
from kuma.users.models import UserProfile
@never_cache
@require_GET
def whoami(request):
"""
Return a JSON object representing the current user, either
authenticated or anonymous.
"""
data = {}
user = request.user
cloudfront_country_header = "HTTP_CLOUDFRONT_VIEWER_COUNTRY_NAME"
cloudfront_country_value = request.META.get(cloudfront_country_header)
if cloudfront_country_value:
data.update({"geo": {"country": cloudfront_country_value}})
if not user.is_authenticated:
return JsonResponse(data)
data = {
"username": user.username,
"is_authenticated": True,
"email": user.email,
}
if user.is_staff:
data["is_staff"] = True
if user.is_superuser:
data["is_superuser"] = True
try:
profile = UserProfile.objects.get(user=user)
except UserProfile.DoesNotExist:
profile = None
if profile:
data["avatar_url"] = profile.avatar
data["is_subscriber"] = profile.is_subscriber
return JsonResponse(data)
@never_cache
def account_settings(request):
user = request.user
if not user.is_authenticated:
return HttpResponseForbidden("not signed in")
for user_profile in UserProfile.objects.filter(user=user):
break
else:
user_profile = None
if request.method == "DELETE":
user.delete()
return JsonResponse({"deleted": True})
elif request.method == "POST":
form = AccountSettingsForm(request.POST)
if not form.is_valid():
return JsonResponse({"errors": form.errors.get_json_data()}, status=400)
set_locale = None
if form.cleaned_data.get("locale"):
set_locale = form.cleaned_data["locale"]
if user_profile:
user_profile.locale = set_locale
user_profile.save()
else:
user_profile = UserProfile.objects.create(user=user, locale=set_locale)
response = JsonResponse({"ok": True})
if set_locale:
response.set_cookie(
key=settings.LANGUAGE_COOKIE_NAME,
value=set_locale,
max_age=settings.LANGUAGE_COOKIE_AGE,
path=settings.LANGUAGE_COOKIE_PATH,
domain=settings.LANGUAGE_COOKIE_DOMAIN,
secure=settings.LANGUAGE_COOKIE_SECURE,
)
return response
context = {
"csrfmiddlewaretoken": get_token(request),
"locale": user_profile.locale if user_profile else None,
}
return JsonResponse(context)
| fiji-flo | cbda569e9a2cd16b8a6d16bafb721e6dd2769467 | 053c45db24a549999707650a93e3808238521b40 | nit: Since this is getting a value of `True` you could just assign `data['is_subscriber'] = profile.is_subscriber` to remove the `if` clause | akatsoulas | 11 |
mdn/kuma | 8,020 | support core users | null | null | 2021-12-15 19:51:40+00:00 | 2021-12-16 17:31:33+00:00 | kuma/users/auth.py | import time
from django.conf import settings
from django.contrib.auth import get_user_model
from mozilla_django_oidc.auth import OIDCAuthenticationBackend
from .models import UserProfile
class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend):
"""Extend mozilla-django-oidc authbackend."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.refresh_token = None
def get_token(self, payload):
"""Override get_token to extract the refresh token."""
token_info = super().get_token(payload)
self.refresh_token = token_info.get("refresh_token")
return token_info
def filter_users_by_claims(self, claims):
user_model = get_user_model()
if not (fxa_uid := claims.get("sub")):
return user_model.objects.none()
return user_model.objects.filter(username=fxa_uid)
def create_user(self, claims):
if (
not (subscriptions := claims.get("subscriptions"))
or settings.MDN_PLUS_SUBSCRIPTION not in subscriptions
):
return None
user = super().create_user(claims)
self._create_or_set_user_profile(user, claims)
return user
def update_user(self, user, claims):
self._create_or_set_user_profile(user, claims)
return user
def get_username(self, claims):
"""Get the username from the claims."""
# use the fxa_uid as the username
return claims.get("sub")
@staticmethod
def create_or_update_subscriber(claims, user=None):
"""Retrieve or create a user with a profile.
Static helper method that routes requests that are not part of the login flow
"""
email = claims.get("email")
fxa_uid = claims.get("sub")
if not fxa_uid:
return
try:
# short-circuit if we already have a user
user = user or get_user_model().objects.get(username=fxa_uid)
except get_user_model().DoesNotExist:
user = get_user_model().objects.create_user(email=email, username=fxa_uid)
# update the email if needed
if email and user.email != email:
user.email = email
# toggle user status based on subscriptions
user.is_active = settings.MDN_PLUS_SUBSCRIPTION in claims.get(
"subscriptions", []
) or settings.MDN_PLUS_SUBSCRIPTION == claims.get("fxa-subscriptions", "")
user.save()
profile, _ = UserProfile.objects.get_or_create(user=user)
if avatar := claims.get("avatar"):
profile.avatar = avatar
profile.save()
return user
def _create_or_set_user_profile(self, user, claims):
"""Update user and profile attributes."""
user = self.create_or_update_subscriber(claims, user)
if self.refresh_token:
UserProfile.objects.filter(user=user).update(
fxa_refresh_token=self.refresh_token
)
def logout_url(request):
"""This gets called by mozilla_django_oidc when a user has signed out."""
return (
request.GET.get("next")
or request.session.get("oidc_login_next")
or getattr(settings, "LOGOUT_REDIRECT_URL", None)
or "/"
)
def is_authorized_request(token, **kwargs):
auth = token.split()
if auth[0].lower() != "bearer":
return {"error": "invalid token type"}
jwt_token = auth[1]
if not (payload := KumaOIDCAuthenticationBackend().verify_token(jwt_token)):
return {"error": "invalid token"}
issuer = payload["iss"]
exp = payload["exp"]
# # If the issuer is not Firefox Accounts log an error
if settings.FXA_TOKEN_ISSUER != issuer:
return {"error": "invalid token issuer"}
# Check if the token is expired
if exp < time.time():
return {"error": "token expired"}
# check if there is a valid subscription
if payload.get("fxa-subscriptions", "") != settings.MDN_PLUS_SUBSCRIPTION:
return {"error": "not a subscriber"}
return payload
| import time
from django.conf import settings
from django.contrib.auth import get_user_model
from mozilla_django_oidc.auth import OIDCAuthenticationBackend
from .models import UserProfile
class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend):
"""Extend mozilla-django-oidc authbackend."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.refresh_token = None
def get_token(self, payload):
"""Override get_token to extract the refresh token."""
token_info = super().get_token(payload)
self.refresh_token = token_info.get("refresh_token")
return token_info
def filter_users_by_claims(self, claims):
user_model = get_user_model()
if not (fxa_uid := claims.get("sub")):
return user_model.objects.none()
return user_model.objects.filter(username=fxa_uid)
def create_user(self, claims):
user = super().create_user(claims)
self._create_or_set_user_profile(user, claims)
self.request.created = True
return user
def update_user(self, user, claims):
self._create_or_set_user_profile(user, claims)
return user
def get_username(self, claims):
"""Get the username from the claims."""
# use the fxa_uid as the username
return claims.get("sub")
@staticmethod
def create_or_update_subscriber(claims, user=None):
"""Retrieve or create a user with a profile.
Static helper method that routes requests that are not part of the login flow
"""
email = claims.get("email")
fxa_uid = claims.get("sub")
if not fxa_uid:
return
try:
# short-circuit if we already have a user
user = user or get_user_model().objects.get(username=fxa_uid)
except get_user_model().DoesNotExist:
user = get_user_model().objects.create_user(email=email, username=fxa_uid)
# update the email if needed
if email and user.email != email:
user.email = email
# toggle user status based on subscriptions
user.is_active = True
user.save()
profile, _ = UserProfile.objects.get_or_create(user=user)
if avatar := claims.get("avatar"):
profile.avatar = avatar
profile.is_subscriber = settings.MDN_PLUS_SUBSCRIPTION in claims.get(
"subscriptions", []
) or settings.MDN_PLUS_SUBSCRIPTION == claims.get("fxa-subscriptions", "")
profile.save()
return user
def _create_or_set_user_profile(self, user, claims):
"""Update user and profile attributes."""
user = self.create_or_update_subscriber(claims, user)
if self.refresh_token:
UserProfile.objects.filter(user=user).update(
fxa_refresh_token=self.refresh_token
)
def logout_url(request):
"""This gets called by mozilla_django_oidc when a user has signed out."""
return (
request.GET.get("next")
or request.session.get("oidc_login_next")
or getattr(settings, "LOGOUT_REDIRECT_URL", None)
or "/"
)
def is_authorized_request(token, **kwargs):
auth = token.split()
if auth[0].lower() != "bearer":
return {"error": "invalid token type"}
jwt_token = auth[1]
if not (payload := KumaOIDCAuthenticationBackend().verify_token(jwt_token)):
return {"error": "invalid token"}
issuer = payload["iss"]
exp = payload["exp"]
# # If the issuer is not Firefox Accounts log an error
if settings.FXA_TOKEN_ISSUER != issuer:
return {"error": "invalid token issuer"}
# Check if the token is expired
if exp < time.time():
return {"error": "token expired"}
# check if there is a valid subscription
if payload.get("fxa-subscriptions", "") != settings.MDN_PLUS_SUBSCRIPTION:
return {"error": "not a subscriber"}
return payload
| fiji-flo | cbda569e9a2cd16b8a6d16bafb721e6dd2769467 | 053c45db24a549999707650a93e3808238521b40 | super nit: You can omit completely and change the check in the CallbackView to
`if self.request.get("created") and not is_subscriber` | akatsoulas | 12 |
mdn/kuma | 8,020 | support core users | null | null | 2021-12-15 19:51:40+00:00 | 2021-12-16 17:31:33+00:00 | kuma/users/models.py | import json
from django.contrib.auth import get_user_model
from django.db import models
class UserProfile(models.Model):
user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE)
locale = models.CharField(max_length=6, null=True)
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
avatar = models.URLField(max_length=512, blank=True, default="")
fxa_refresh_token = models.CharField(blank=True, default="", max_length=128)
class Meta:
verbose_name = "User profile"
def __str__(self):
return json.dumps(
{
"uid": self.user.username,
"is_subscriber": self.user.is_active,
"email": self.user.email,
"avatar": self.avatar,
}
)
| import json
from django.contrib.auth import get_user_model
from django.db import models
class UserProfile(models.Model):
user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE)
locale = models.CharField(max_length=6, null=True)
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
avatar = models.URLField(max_length=512, blank=True, default="")
fxa_refresh_token = models.CharField(blank=True, default="", max_length=128)
is_subscriber = models.BooleanField(default=False)
class Meta:
verbose_name = "User profile"
def __str__(self):
return json.dumps(
{
"uid": self.user.username,
"is_subscriber": self.user.is_subscriber,
"email": self.user.email,
"avatar": self.avatar,
}
)
| fiji-flo | cbda569e9a2cd16b8a6d16bafb721e6dd2769467 | 053c45db24a549999707650a93e3808238521b40 | Shouldn't this be `self.is_subscriber`? | akatsoulas | 13 |
mdn/kuma | 8,020 | support core users | null | null | 2021-12-15 19:51:40+00:00 | 2021-12-16 17:31:33+00:00 | kuma/users/models.py | import json
from django.contrib.auth import get_user_model
from django.db import models
class UserProfile(models.Model):
user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE)
locale = models.CharField(max_length=6, null=True)
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
avatar = models.URLField(max_length=512, blank=True, default="")
fxa_refresh_token = models.CharField(blank=True, default="", max_length=128)
class Meta:
verbose_name = "User profile"
def __str__(self):
return json.dumps(
{
"uid": self.user.username,
"is_subscriber": self.user.is_active,
"email": self.user.email,
"avatar": self.avatar,
}
)
| import json
from django.contrib.auth import get_user_model
from django.db import models
class UserProfile(models.Model):
user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE)
locale = models.CharField(max_length=6, null=True)
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
avatar = models.URLField(max_length=512, blank=True, default="")
fxa_refresh_token = models.CharField(blank=True, default="", max_length=128)
is_subscriber = models.BooleanField(default=False)
class Meta:
verbose_name = "User profile"
def __str__(self):
return json.dumps(
{
"uid": self.user.username,
"is_subscriber": self.user.is_subscriber,
"email": self.user.email,
"avatar": self.avatar,
}
)
| fiji-flo | cbda569e9a2cd16b8a6d16bafb721e6dd2769467 | 053c45db24a549999707650a93e3808238521b40 | good catch but I just pushed it 👍 | fiji-flo | 14 |
dbcli/litecli | 165 | Add key binding to accept completion with the right-arrow key | ## Description
zsh-autosuggestions uses right-arrow to select a suggestion. I found that I end up hitting right-arrow often when using litecli. This change adds a key binding for right-arrow to accept a completion.
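For reference, here is a minimal sketch of how such a binding can be expressed with prompt_toolkit (illustrative only; the variable names and placement are assumptions, not necessarily the exact code added by this change):
```python
from prompt_toolkit.filters import completion_is_selected
from prompt_toolkit.key_binding import KeyBindings

kb = KeyBindings()

@kb.add("right", filter=completion_is_selected)
def _(event):
    """Accept the highlighted completion, mimicking zsh-autosuggestions."""
    buff = event.current_buffer
    completion = buff.complete_state.current_completion
    if completion is not None:
        buff.apply_completion(completion)
```
The `completion_is_selected` filter keeps the binding inactive when no completion is highlighted, so a plain right-arrow still moves the cursor as before.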
## Checklist
<!--- We appreciate your help and want to give you credit. Please take a moment to put an `x` in the boxes below as you complete them. -->
- [ ] I've added this contribution to the `CHANGELOG.md` file.
| null | 2023-09-27 15:26:47+00:00 | 2023-09-27 15:33:32+00:00 | tests/test_smart_completion_public_schema_only.py | # coding: utf-8
from __future__ import unicode_literals
import pytest
from mock import patch
from prompt_toolkit.completion import Completion
from prompt_toolkit.document import Document
metadata = {
"users": ["id", "email", "first_name", "last_name"],
"orders": ["id", "ordered_date", "status"],
"select": ["id", "insert", "ABC"],
"réveillé": ["id", "insert", "ABC"],
}
@pytest.fixture
def completer():
import litecli.sqlcompleter as sqlcompleter
comp = sqlcompleter.SQLCompleter()
tables, columns = [], []
for table, cols in metadata.items():
tables.append((table,))
columns.extend([(table, col) for col in cols])
comp.set_dbname("test")
comp.extend_schemata("test")
comp.extend_relations(tables, kind="tables")
comp.extend_columns(columns, kind="tables")
return comp
@pytest.fixture
def complete_event():
from mock import Mock
return Mock()
def test_empty_string_completion(completer, complete_event):
text = ""
position = 0
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert list(map(Completion, sorted(completer.keywords))) == result
def test_select_keyword_completion(completer, complete_event):
text = "SEL"
position = len("SEL")
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list([Completion(text="SELECT", start_position=-3)])
def test_table_completion(completer, complete_event):
text = "SELECT * FROM "
position = len(text)
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list(
[
Completion(text="`réveillé`", start_position=0),
Completion(text="`select`", start_position=0),
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_function_name_completion(completer, complete_event):
text = "SELECT MA"
position = len("SELECT MA")
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list(
[
Completion(text="MAX", start_position=-2),
Completion(text="MATCH", start_position=-2),
]
)
def test_suggested_column_names(completer, complete_event):
"""Suggest column and function names when selecting from table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT from users"
position = len("SELECT ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="users", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def test_suggested_column_names_in_function(completer, complete_event):
"""Suggest column and function names when selecting multiple columns from
table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT MAX( from users"
position = len("SELECT MAX(")
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_column_names_with_table_dot(completer, complete_event):
"""Suggest column names on table name and dot.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT users. from users"
position = len("SELECT users.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_column_names_with_alias(completer, complete_event):
"""Suggest column names on table alias and dot.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT u. from users u"
position = len("SELECT u.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_multiple_column_names(completer, complete_event):
"""Suggest column and function names when selecting multiple columns from
table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT id, from users u"
position = len("SELECT id, ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="u", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def test_suggested_multiple_column_names_with_alias(completer, complete_event):
"""Suggest column names on table alias and dot when selecting multiple
columns from table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT u.id, u. from users u"
position = len("SELECT u.id, u.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_multiple_column_names_with_dot(completer, complete_event):
"""Suggest column names on table names and dot when selecting multiple
columns from table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT users.id, users. from users u"
position = len("SELECT users.id, users.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_aliases_after_on(completer, complete_event):
text = "SELECT u.name, o.id FROM users u JOIN orders o ON "
position = len("SELECT u.name, o.id FROM users u JOIN orders o ON ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[Completion(text="o", start_position=0), Completion(text="u", start_position=0)]
)
def test_suggested_aliases_after_on_right_side(completer, complete_event):
text = "SELECT u.name, o.id FROM users u JOIN orders o ON o.user_id = "
position = len("SELECT u.name, o.id FROM users u JOIN orders o ON o.user_id = ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[Completion(text="o", start_position=0), Completion(text="u", start_position=0)]
)
def test_suggested_tables_after_on(completer, complete_event):
text = "SELECT users.name, orders.id FROM users JOIN orders ON "
position = len("SELECT users.name, orders.id FROM users JOIN orders ON ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_suggested_tables_after_on_right_side(completer, complete_event):
text = "SELECT users.name, orders.id FROM users JOIN orders ON orders.user_id = "
position = len(
"SELECT users.name, orders.id FROM users JOIN orders ON orders.user_id = "
)
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert list(result) == list(
[
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_table_names_after_from(completer, complete_event):
text = "SELECT * FROM "
position = len("SELECT * FROM ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert list(result) == list(
[
Completion(text="`réveillé`", start_position=0),
Completion(text="`select`", start_position=0),
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_auto_escaped_col_names(completer, complete_event):
text = "SELECT from `select`"
position = len("SELECT ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert (
result
== [
Completion(text="*", start_position=0),
Completion(text="`ABC`", start_position=0),
Completion(text="`insert`", start_position=0),
Completion(text="id", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="`select`", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def test_un_escaped_table_names(completer, complete_event):
text = "SELECT from réveillé"
position = len("SELECT ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="`ABC`", start_position=0),
Completion(text="`insert`", start_position=0),
Completion(text="id", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="réveillé", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def dummy_list_path(dir_name):
dirs = {
"/": ["dir1", "file1.sql", "file2.sql"],
"/dir1": ["subdir1", "subfile1.sql", "subfile2.sql"],
"/dir1/subdir1": ["lastfile.sql"],
}
return dirs.get(dir_name, [])
@patch("litecli.packages.filepaths.list_path", new=dummy_list_path)
@pytest.mark.parametrize(
"text,expected",
[
("source ", [(".", 0), ("..", 0), ("/", 0), ("~", 0)]),
("source /", [("dir1", 0), ("file1.sql", 0), ("file2.sql", 0)]),
("source /dir1/", [("subdir1", 0), ("subfile1.sql", 0), ("subfile2.sql", 0)]),
("source /dir1/subdir1/", [("lastfile.sql", 0)]),
],
)
def test_file_name_completion(completer, complete_event, text, expected):
position = len(text)
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
expected = list([Completion(txt, pos) for txt, pos in expected])
assert result == expected
| # coding: utf-8
from __future__ import unicode_literals
import pytest
from mock import patch
from prompt_toolkit.completion import Completion
from prompt_toolkit.document import Document
metadata = {
"users": ["id", "email", "first_name", "last_name"],
"orders": ["id", "ordered_date", "status"],
"select": ["id", "insert", "ABC"],
"réveillé": ["id", "insert", "ABC"],
}
@pytest.fixture
def completer():
import litecli.sqlcompleter as sqlcompleter
comp = sqlcompleter.SQLCompleter()
tables, columns = [], []
for table, cols in metadata.items():
tables.append((table,))
columns.extend([(table, col) for col in cols])
comp.set_dbname("test")
comp.extend_schemata("test")
comp.extend_relations(tables, kind="tables")
comp.extend_columns(columns, kind="tables")
return comp
@pytest.fixture
def complete_event():
from mock import Mock
return Mock()
def test_empty_string_completion(completer, complete_event):
text = ""
position = 0
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert list(map(Completion, sorted(completer.keywords))) == result
def test_select_keyword_completion(completer, complete_event):
text = "SEL"
position = len("SEL")
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list([Completion(text="SELECT", start_position=-3)])
def test_table_completion(completer, complete_event):
text = "SELECT * FROM "
position = len(text)
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list(
[
Completion(text="`réveillé`", start_position=0),
Completion(text="`select`", start_position=0),
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_function_name_completion(completer, complete_event):
text = "SELECT MA"
position = len("SELECT MA")
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list(
[
Completion(text="MAX", start_position=-2),
Completion(text="MATCH", start_position=-2),
]
)
def test_suggested_column_names(completer, complete_event):
"""Suggest column and function names when selecting from table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT from users"
position = len("SELECT ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="users", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def test_suggested_column_names_in_function(completer, complete_event):
"""Suggest column and function names when selecting multiple columns from
table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT MAX( from users"
position = len("SELECT MAX(")
result = completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
assert list(result) == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_column_names_with_table_dot(completer, complete_event):
"""Suggest column names on table name and dot.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT users. from users"
position = len("SELECT users.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_column_names_with_alias(completer, complete_event):
"""Suggest column names on table alias and dot.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT u. from users u"
position = len("SELECT u.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_multiple_column_names(completer, complete_event):
"""Suggest column and function names when selecting multiple columns from
table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT id, from users u"
position = len("SELECT id, ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="u", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def test_suggested_multiple_column_names_with_alias(completer, complete_event):
"""Suggest column names on table alias and dot when selecting multiple
columns from table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT u.id, u. from users u"
position = len("SELECT u.id, u.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_multiple_column_names_with_dot(completer, complete_event):
"""Suggest column names on table names and dot when selecting multiple
columns from table.
:param completer:
:param complete_event:
:return:
"""
text = "SELECT users.id, users. from users u"
position = len("SELECT users.id, users.")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="email", start_position=0),
Completion(text="first_name", start_position=0),
Completion(text="id", start_position=0),
Completion(text="last_name", start_position=0),
]
)
def test_suggested_aliases_after_on(completer, complete_event):
text = "SELECT u.name, o.id FROM users u JOIN orders o ON "
position = len("SELECT u.name, o.id FROM users u JOIN orders o ON ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[Completion(text="o", start_position=0), Completion(text="u", start_position=0)]
)
def test_suggested_aliases_after_on_right_side(completer, complete_event):
text = "SELECT u.name, o.id FROM users u JOIN orders o ON o.user_id = "
position = len("SELECT u.name, o.id FROM users u JOIN orders o ON o.user_id = ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[Completion(text="o", start_position=0), Completion(text="u", start_position=0)]
)
def test_suggested_tables_after_on(completer, complete_event):
text = "SELECT users.name, orders.id FROM users JOIN orders ON "
position = len("SELECT users.name, orders.id FROM users JOIN orders ON ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_suggested_tables_after_on_right_side(completer, complete_event):
text = "SELECT users.name, orders.id FROM users JOIN orders ON orders.user_id = "
position = len(
"SELECT users.name, orders.id FROM users JOIN orders ON orders.user_id = "
)
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert list(result) == list(
[
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_table_names_after_from(completer, complete_event):
text = "SELECT * FROM "
position = len("SELECT * FROM ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert list(result) == list(
[
Completion(text="`réveillé`", start_position=0),
Completion(text="`select`", start_position=0),
Completion(text="orders", start_position=0),
Completion(text="users", start_position=0),
]
)
def test_auto_escaped_col_names(completer, complete_event):
text = "SELECT from `select`"
position = len("SELECT ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert (
result
== [
Completion(text="*", start_position=0),
Completion(text="`ABC`", start_position=0),
Completion(text="`insert`", start_position=0),
Completion(text="id", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="select", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def test_un_escaped_table_names(completer, complete_event):
text = "SELECT from réveillé"
position = len("SELECT ")
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
assert result == list(
[
Completion(text="*", start_position=0),
Completion(text="`ABC`", start_position=0),
Completion(text="`insert`", start_position=0),
Completion(text="id", start_position=0),
]
+ list(map(Completion, completer.functions))
+ [Completion(text="réveillé", start_position=0)]
+ list(map(Completion, sorted(completer.keywords)))
)
def dummy_list_path(dir_name):
dirs = {
"/": ["dir1", "file1.sql", "file2.sql"],
"/dir1": ["subdir1", "subfile1.sql", "subfile2.sql"],
"/dir1/subdir1": ["lastfile.sql"],
}
return dirs.get(dir_name, [])
@patch("litecli.packages.filepaths.list_path", new=dummy_list_path)
@pytest.mark.parametrize(
"text,expected",
[
("source ", [(".", 0), ("..", 0), ("/", 0), ("~", 0)]),
("source /", [("dir1", 0), ("file1.sql", 0), ("file2.sql", 0)]),
("source /dir1/", [("subdir1", 0), ("subfile1.sql", 0), ("subfile2.sql", 0)]),
("source /dir1/subdir1/", [("lastfile.sql", 0)]),
],
)
def test_file_name_completion(completer, complete_event, text, expected):
position = len(text)
result = list(
completer.get_completions(
Document(text=text, cursor_position=position), complete_event
)
)
expected = list([Completion(txt, pos) for txt, pos in expected])
assert result == expected
| liamhennebury | 5975d2010278fda42aa224be5770113fc15ee28f | fbaf48d9f6d9c33794ca25876f851b6318c402a6 | Pre-existing unit test failure, unrelated to change. | liamhennebury | 0 |
dbcli/litecli | 160 | Fixing startupcommands and successful perpetually set as True | ## Description
### Startup commands
As seen in https://github.com/dbcli/litecli/issues/56 there is a wish for the option to define a set of commands that are executed on startup of litecli. In my case this would for instance be `.tables`, as I always seem to forget their names.
Startup commands are set in liteclirc; I chose this rather than a new rc file to keep things simple.
```
# Startup commands
# litecli commands or sqlite commands to be executed on startup.
# some of them will require you to have a database attached.
# they will be executed in the same order as they appear in the list.
[startup_commands]
#commands = ".tables", "pragma foreign_keys = ON;"
```
As I wanted to keep in line with the rest of the codebase, the commands are executed using `sqlexecute.run(command)` in the `startup_commands()` function, which loops through all the startup commands at startup. To facilitate helpful error messaging I have also added `check_if_sqlitedotcommand(command)`, which is invoked inside `sqlexecute.run()` to check whether the command the user is trying to execute is indeed a valid dot command; however, it is not implemented yet. I thought this would be nice so that users don't have to wonder whether they had a spelling error in their command in those cases.
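To make the intent concrete, here is a rough, self-contained sketch of the startup loop. Apart from `sqlexecute.run()` and `check_if_sqlitedotcommand()`, which are mentioned above, the names are hypothetical and the real code may differ:
```python
def run_startup_commands(sqlexecute, commands):
    """Loop over the configured startup commands and run each in order."""
    for command in commands:
        try:
            # sqlexecute.run() handles both regular SQL and special commands;
            # check_if_sqlitedotcommand() is intended to improve the error
            # message when an unknown dot command is given.
            list(sqlexecute.run(command))
        except Exception as e:
            print(f"Startup command failed: {command!r} ({e})")
```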
With this implementation there may be a wish to be able to use more of the dot commands or other special commands from SQLite - should we implement more of these in LiteCLI? Or is the wish to not expand the range of special commands offered?
### Successful perpetually set as True
Queries are, among other places, logged into `self.query_history`, which can also be accessed through `self.get_last_query`. Here they are stored as `"Query", ["query", "successful", "mutating"]`.
`successful` is initialized [in main.py](https://github.com/dbcli/litecli/blob/main/litecli/main.py#L441) as `successful = False`. This was undoubtedly with the intention of setting `successful = True` on successful execution of queries. However, I have found that failing queries are marked as successful simply because the program continues to run after the initial `res = sqlexecute.run(text)` and thus sets `successful = True`.
Test case, hardcoded, from [this line](https://github.com/dbcli/litecli/blob/main/litecli/main.py#L441):
```python
successful = False
start = time()
text = "this-is-not-a-query;"
res = sqlexecute.run(text)
self.formatter.query = text
successful = True
```
My fix is to set `successful = True` only in the subsequent execution logic, typically in an `else:` branch of the `try/except` statements.
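A minimal, self-contained sketch of that pattern (the variable and field names mirror main.py, but this is illustrative rather than the actual diff):
```python
from collections import namedtuple
from time import time

Query = namedtuple("Query", ["query", "successful", "mutating"])

def run_and_record(sqlexecute, text):
    """Mark the query successful only in the try/except's else branch."""
    successful = False
    start = time()
    try:
        list(sqlexecute.run(text))  # consume the generator; raises for an invalid query
    except Exception as e:
        print(str(e))
    else:
        successful = True  # reached only when run() did not raise
    print(f"took {time() - start:.3f}s")
    return Query(text, successful, mutating=False)
```
With this, a failing query such as `this-is-not-a-query;` stays recorded with `successful=False` instead of being flipped back to `True` further down.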
If the intention was that `successful = True` simply means a query was handled in any way, this pull request can be disregarded; however, please consider this interpretation of the logic, as it makes it easier to access failed queries.
## Checklist
<!--- We appreciate your help and want to give you credit. Please take a moment to put an `x` in the boxes below as you complete them. -->
- [x] I've added this contribution to the `CHANGELOG.md` file.
Note: One test is failing; this is unrelated to these changes, as it fails on the code from the main branch as well - see [issue 153](https://github.com/dbcli/litecli/issues/153) | null | 2023-05-03 21:01:28+00:00 | 2023-05-12 01:02:31+00:00 | tests/liteclirc | [main]
# Multi-line mode allows breaking up the sql statements into multiple lines. If
# this is set to True, then the end of the statements must have a semi-colon.
# If this is set to False then sql statements can't be split into multiple
# lines. End of line (return) is considered as the end of the statement.
multi_line = False
# Destructive warning mode will alert you before executing a sql statement
# that may cause harm to the database such as "drop table", "drop database"
# or "shutdown".
destructive_warning = True
# log_file location.
# In Unix/Linux: ~/.config/litecli/log
# In Windows: %USERPROFILE%\AppData\Local\dbcli\litecli\log
# %USERPROFILE% is typically C:\Users\{username}
log_file = default
# Default log level. Possible values: "CRITICAL", "ERROR", "WARNING", "INFO"
# and "DEBUG". "NONE" disables logging.
log_level = INFO
# Log every query and its results to a file. Enable this by uncommenting the
# line below.
# audit_log = ~/.litecli-audit.log
# Default pager.
# By default '$PAGER' environment variable is used
# pager = less -SRXF
# Table format. Possible values:
# ascii, double, github, psql, plain, simple, grid, fancy_grid, pipe, orgtbl,
# rst, mediawiki, html, latex, latex_booktabs, textile, moinmoin, jira,
# vertical, tsv, csv.
# Recommended: ascii
table_format = ascii
# Syntax coloring style. Possible values (many support the "-dark" suffix):
# manni, igor, xcode, vim, autumn, vs, rrt, native, perldoc, borland, tango, emacs,
# friendly, monokai, paraiso, colorful, murphy, bw, pastie, paraiso, trac, default,
# fruity.
# Screenshots at http://mycli.net/syntax
syntax_style = default
# Keybindings: Possible values: emacs, vi.
# Emacs mode: Ctrl-A is home, Ctrl-E is end. All emacs keybindings are available in the REPL.
# When Vi mode is enabled you can use modal editing features offered by Vi in the REPL.
key_bindings = emacs
# Enabling this option will show the suggestions in a wider menu. Thus more items are suggested.
wider_completion_menu = False
# Autocompletion is on by default. This can be turned off by setting this
# option to False. Pressing tab will still trigger completion.
autocompletion = True
# litecli prompt
# \D - The full current date
# \d - Database name
# \f - File basename of the "main" database
# \m - Minutes of the current time
# \n - Newline
# \P - AM/PM
# \R - The current time, in 24-hour military time (0-23)
# \r - The current time, standard 12-hour time (1-12)
# \s - Seconds of the current time
# \x1b[...m - insert ANSI escape sequence
prompt = "\t :\d> "
prompt_continuation = "-> "
# Show/hide the informational toolbar with function keymap at the footer.
show_bottom_toolbar = True
# Skip intro info on startup and outro info on exit
less_chatty = False
# Use alias from --login-path instead of host name in prompt
login_path_as_host = False
# Cause result sets to be displayed vertically if they are too wide for the current window,
# and using normal tabular format otherwise. (This applies to statements terminated by ; or \G.)
auto_vertical_output = False
# keyword casing preference. Possible values "lower", "upper", "auto"
keyword_casing = auto
# disabled pager on startup
enable_pager = True
[colors]
completion-menu.completion.current = "bg:#ffffff #000000"
completion-menu.completion = "bg:#008888 #ffffff"
completion-menu.meta.completion.current = "bg:#44aaaa #000000"
completion-menu.meta.completion = "bg:#448888 #ffffff"
completion-menu.multi-column-meta = "bg:#aaffff #000000"
scrollbar.arrow = "bg:#003333"
scrollbar = "bg:#00aaaa"
selected = "#ffffff bg:#6666aa"
search = "#ffffff bg:#4444aa"
search.current = "#ffffff bg:#44aa44"
bottom-toolbar = "bg:#222222 #aaaaaa"
bottom-toolbar.off = "bg:#222222 #888888"
bottom-toolbar.on = "bg:#222222 #ffffff"
search-toolbar = noinherit bold
search-toolbar.text = nobold
system-toolbar = noinherit bold
arg-toolbar = noinherit bold
arg-toolbar.text = nobold
bottom-toolbar.transaction.valid = "bg:#222222 #00ff5f bold"
bottom-toolbar.transaction.failed = "bg:#222222 #ff005f bold"
# style classes for colored table output
output.header = "#00ff5f bold"
output.odd-row = ""
output.even-row = ""
Token.Menu.Completions.Completion.Current = "bg:#00aaaa #000000"
Token.Menu.Completions.Completion = "bg:#008888 #ffffff"
Token.Menu.Completions.MultiColumnMeta = "bg:#aaffff #000000"
Token.Menu.Completions.ProgressButton = "bg:#003333"
Token.Menu.Completions.ProgressBar = "bg:#00aaaa"
Token.Output.Header = bold
Token.Output.OddRow = ""
Token.Output.EvenRow = ""
Token.SelectedText = "#ffffff bg:#6666aa"
Token.SearchMatch = "#ffffff bg:#4444aa"
Token.SearchMatch.Current = "#ffffff bg:#44aa44"
Token.Toolbar = "bg:#222222 #aaaaaa"
Token.Toolbar.Off = "bg:#222222 #888888"
Token.Toolbar.On = "bg:#222222 #ffffff"
Token.Toolbar.Search = noinherit bold
Token.Toolbar.Search.Text = nobold
Token.Toolbar.System = noinherit bold
Token.Toolbar.Arg = noinherit bold
Token.Toolbar.Arg.Text = nobold
[favorite_queries]
q_param = select * from test where name=?
sh_param = select * from test where id=$1
| [main]
# Multi-line mode allows breaking up the sql statements into multiple lines. If
# this is set to True, then the end of the statements must have a semi-colon.
# If this is set to False then sql statements can't be split into multiple
# lines. End of line (return) is considered as the end of the statement.
multi_line = False
# Destructive warning mode will alert you before executing a sql statement
# that may cause harm to the database such as "drop table", "drop database"
# or "shutdown".
destructive_warning = True
# log_file location.
# In Unix/Linux: ~/.config/litecli/log
# In Windows: %USERPROFILE%\AppData\Local\dbcli\litecli\log
# %USERPROFILE% is typically C:\Users\{username}
log_file = default
# Default log level. Possible values: "CRITICAL", "ERROR", "WARNING", "INFO"
# and "DEBUG". "NONE" disables logging.
log_level = INFO
# Log every query and its results to a file. Enable this by uncommenting the
# line below.
# audit_log = ~/.litecli-audit.log
# Default pager.
# By default '$PAGER' environment variable is used
# pager = less -SRXF
# Table format. Possible values:
# ascii, double, github, psql, plain, simple, grid, fancy_grid, pipe, orgtbl,
# rst, mediawiki, html, latex, latex_booktabs, textile, moinmoin, jira,
# vertical, tsv, csv.
# Recommended: ascii
table_format = ascii
# Syntax coloring style. Possible values (many support the "-dark" suffix):
# manni, igor, xcode, vim, autumn, vs, rrt, native, perldoc, borland, tango, emacs,
# friendly, monokai, paraiso, colorful, murphy, bw, pastie, paraiso, trac, default,
# fruity.
# Screenshots at http://mycli.net/syntax
syntax_style = default
# Keybindings: Possible values: emacs, vi.
# Emacs mode: Ctrl-A is home, Ctrl-E is end. All emacs keybindings are available in the REPL.
# When Vi mode is enabled you can use modal editing features offered by Vi in the REPL.
key_bindings = emacs
# Enabling this option will show the suggestions in a wider menu. Thus more items are suggested.
wider_completion_menu = False
# Autocompletion is on by default. This can be turned off by setting this
# option to False. Pressing tab will still trigger completion.
autocompletion = True
# litecli prompt
# \D - The full current date
# \d - Database name
# \f - File basename of the "main" database
# \m - Minutes of the current time
# \n - Newline
# \P - AM/PM
# \R - The current time, in 24-hour military time (0-23)
# \r - The current time, standard 12-hour time (1-12)
# \s - Seconds of the current time
# \x1b[...m - insert ANSI escape sequence
prompt = "\t :\d> "
prompt_continuation = "-> "
# Show/hide the informational toolbar with function keymap at the footer.
show_bottom_toolbar = True
# Skip intro info on startup and outro info on exit
less_chatty = False
# Use alias from --login-path instead of host name in prompt
login_path_as_host = False
# Cause result sets to be displayed vertically if they are too wide for the current window,
# and using normal tabular format otherwise. (This applies to statements terminated by ; or \G.)
auto_vertical_output = False
# keyword casing preference. Possible values "lower", "upper", "auto"
keyword_casing = auto
# disabled pager on startup
enable_pager = True
[colors]
completion-menu.completion.current = "bg:#ffffff #000000"
completion-menu.completion = "bg:#008888 #ffffff"
completion-menu.meta.completion.current = "bg:#44aaaa #000000"
completion-menu.meta.completion = "bg:#448888 #ffffff"
completion-menu.multi-column-meta = "bg:#aaffff #000000"
scrollbar.arrow = "bg:#003333"
scrollbar = "bg:#00aaaa"
selected = "#ffffff bg:#6666aa"
search = "#ffffff bg:#4444aa"
search.current = "#ffffff bg:#44aa44"
bottom-toolbar = "bg:#222222 #aaaaaa"
bottom-toolbar.off = "bg:#222222 #888888"
bottom-toolbar.on = "bg:#222222 #ffffff"
search-toolbar = noinherit bold
search-toolbar.text = nobold
system-toolbar = noinherit bold
arg-toolbar = noinherit bold
arg-toolbar.text = nobold
bottom-toolbar.transaction.valid = "bg:#222222 #00ff5f bold"
bottom-toolbar.transaction.failed = "bg:#222222 #ff005f bold"
# style classes for colored table output
output.header = "#00ff5f bold"
output.odd-row = ""
output.even-row = ""
Token.Menu.Completions.Completion.Current = "bg:#00aaaa #000000"
Token.Menu.Completions.Completion = "bg:#008888 #ffffff"
Token.Menu.Completions.MultiColumnMeta = "bg:#aaffff #000000"
Token.Menu.Completions.ProgressButton = "bg:#003333"
Token.Menu.Completions.ProgressBar = "bg:#00aaaa"
Token.Output.Header = bold
Token.Output.OddRow = ""
Token.Output.EvenRow = ""
Token.SelectedText = "#ffffff bg:#6666aa"
Token.SearchMatch = "#ffffff bg:#4444aa"
Token.SearchMatch.Current = "#ffffff bg:#44aa44"
Token.Toolbar = "bg:#222222 #aaaaaa"
Token.Toolbar.Off = "bg:#222222 #888888"
Token.Toolbar.On = "bg:#222222 #ffffff"
Token.Toolbar.Search = noinherit bold
Token.Toolbar.Search.Text = nobold
Token.Toolbar.System = noinherit bold
Token.Toolbar.Arg = noinherit bold
Token.Toolbar.Arg.Text = nobold
[favorite_queries]
q_param = select * from test where name=?
sh_param = select * from test where id=$1
# Startup commands
# litecli commands or sqlite commands to be executed on startup.
# some of them will require you to have a database attached.
# they will be executed in the same order as they appear in the list.
[startup_commands]
commands = "create table startupcommands(a text)", "insert into startupcommands values('abc')"
| bjornasm | e5dacd9f0861d1c3a45e8f339ca3a71f5dee2359 | e95c17f435ccc16f57d3f7dba52d546676690e0c | Doesn't this need a `[startup_commands]` section? | amjith | 1 |
dbcli/litecli | 160 | Fixing startupcommands and successful perpetually set as True | ## Description
### Startup commands
As seen in https://github.com/dbcli/litecli/issues/56 there is a wish for having the option to define a set of commands that are to be executed on startup of litecli. For my case this would for instance be `.tables` - as I always seem to forget their names.
Startupcommands are set in liteclirc, I chose this rather than a new rc file to keep things simple.
```
# Startup commands
# litecli commands or sqlite commands to be executed on startup.
# some of them will require you to have a database attached.
# they will be executed in the same order as they appear in the list.
[startup_commands]
#commands = ".tables", "pragma foreign_keys = ON;"
```
As I wanted to keep in line with the rest of the codebase, the commands are executed using `sqlexecute.run(command)` in the `startup_commands()` function, which loops through all the startup commands at startup. To facilitate helpful error messages I have also added `check_if_sqlitedotcommand(command)`, invoked inside `sqlexecute.run()`, to check if the command the user is trying to execute is indeed a valid dot command, even though it is not implemented here. I thought this would be nice so that users don't question whether they had a spelling error in their command in those cases.
With this implementation there may be a wish to utilize more of the dot commands or other special commands from SQLite - should we implement more of these in LiteCLI? Or is the wish rather not to expand the range of special commands offered?
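For illustration, here is a minimal, self-contained sketch of the intended flow - read the `[startup_commands]` section and feed each command to `sqlexecute.run()` in order. The config parsing below is only a stand-in assumption to keep the sketch standalone (litecli's real config loader may already hand the commands back as a list), and error handling is simplified:

```python
# Illustrative sketch only: litecli's actual config loading and error reporting may differ.
from configparser import ConfigParser


def run_startup_commands(sqlexecute, rc_path):
    """Execute every command listed under [startup_commands], in order."""
    config = ConfigParser()
    config.read(rc_path)
    if not config.has_section("startup_commands"):
        return
    # e.g. commands = "create table t(a text)", "insert into t values('abc')"
    raw = config.get("startup_commands", "commands", fallback="")
    commands = [c.strip().strip('"') for c in raw.split(",") if c.strip()]
    for command in commands:
        try:
            # same entry point used for interactive queries
            list(sqlexecute.run(command))
        except Exception as e:
            print(f"Startup command failed: {command!r} ({e})")
```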
### Successful perpetually set as True
Queries are, among other things, logged into `self.query_history`, which can also be accessed through `self.get_last_query`. Here they are stored as `"Query", ["query", "successful", "mutating"]`.
`successful` is set [in main.py](https://github.com/dbcli/litecli/blob/main/litecli/main.py#L441) as `successful = False`. This was undoubtedly with the intention of setting `successful = True` upon successful execution of queries. However, I have found that failing queries are recorded as successful simply because the program continues to run after the initial `res = sqlexecute.run(text)` and thus sets `successful = True`.
Test case, hardcoded, from [this line](https://github.com/dbcli/litecli/blob/main/litecli/main.py#L441):

    successful = False
    start = time()
    text = "this-is-not-a-query;"
    res = sqlexecute.run(text)
    self.formatter.query = text
    successful = True
My fix is to only set `successful = True` in the subsequent execution logic, often as an `else:` in the `try/except:` statements.
If the intention was that `successful = True` simply means a query was handled in any way, this pull request can be disregarded; however, please consider this interpretation of the logic, as it makes it easier to access failed queries.
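To make the intended change concrete, here is a small self-contained sketch of the `try/except/else` pattern described above; `sqlite3` stands in for litecli's SQLExecute wrapper (an assumption for the sake of a runnable example), and the `Query` namedtuple mirrors the fields quoted earlier:

```python
# Self-contained illustration of the fix: "successful" is only set in the else
# branch, so a failing statement is recorded as unsuccessful.
import sqlite3
from collections import namedtuple

Query = namedtuple("Query", ["query", "successful", "mutating"])

def run_and_record(conn, text):
    successful = False
    try:
        res = conn.execute(text).fetchall()
    except sqlite3.Error as e:
        print(f"Error: {e}")
        res = None
    else:
        successful = True  # only reached when execution did not raise
    return Query(text, successful, mutating=False), res

conn = sqlite3.connect(":memory:")
print(run_and_record(conn, "this-is-not-a-query;")[0])  # successful=False
print(run_and_record(conn, "select 1;")[0])             # successful=True
```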
## Checklist
<!--- We appreciate your help and want to give you credit. Please take a moment to put an `x` in the boxes below as you complete them. -->
- [x] I've added this contribution to the `CHANGELOG.md` file.
Note: One test is failing; this is unrelated to these changes, as it fails on the code from the main branch as well - see [issue 153](https://github.com/dbcli/litecli/issues/153) | null | 2023-05-03 21:01:28+00:00 | 2023-05-12 01:02:31+00:00 | tests/liteclirc | [main]
# Multi-line mode allows breaking up the sql statements into multiple lines. If
# this is set to True, then the end of the statements must have a semi-colon.
# If this is set to False then sql statements can't be split into multiple
# lines. End of line (return) is considered as the end of the statement.
multi_line = False
# Destructive warning mode will alert you before executing a sql statement
# that may cause harm to the database such as "drop table", "drop database"
# or "shutdown".
destructive_warning = True
# log_file location.
# In Unix/Linux: ~/.config/litecli/log
# In Windows: %USERPROFILE%\AppData\Local\dbcli\litecli\log
# %USERPROFILE% is typically C:\Users\{username}
log_file = default
# Default log level. Possible values: "CRITICAL", "ERROR", "WARNING", "INFO"
# and "DEBUG". "NONE" disables logging.
log_level = INFO
# Log every query and its results to a file. Enable this by uncommenting the
# line below.
# audit_log = ~/.litecli-audit.log
# Default pager.
# By default '$PAGER' environment variable is used
# pager = less -SRXF
# Table format. Possible values:
# ascii, double, github, psql, plain, simple, grid, fancy_grid, pipe, orgtbl,
# rst, mediawiki, html, latex, latex_booktabs, textile, moinmoin, jira,
# vertical, tsv, csv.
# Recommended: ascii
table_format = ascii
# Syntax coloring style. Possible values (many support the "-dark" suffix):
# manni, igor, xcode, vim, autumn, vs, rrt, native, perldoc, borland, tango, emacs,
# friendly, monokai, paraiso, colorful, murphy, bw, pastie, paraiso, trac, default,
# fruity.
# Screenshots at http://mycli.net/syntax
syntax_style = default
# Keybindings: Possible values: emacs, vi.
# Emacs mode: Ctrl-A is home, Ctrl-E is end. All emacs keybindings are available in the REPL.
# When Vi mode is enabled you can use modal editing features offered by Vi in the REPL.
key_bindings = emacs
# Enabling this option will show the suggestions in a wider menu. Thus more items are suggested.
wider_completion_menu = False
# Autocompletion is on by default. This can be turned off by setting this
# option to False. Pressing tab will still trigger completion.
autocompletion = True
# litecli prompt
# \D - The full current date
# \d - Database name
# \f - File basename of the "main" database
# \m - Minutes of the current time
# \n - Newline
# \P - AM/PM
# \R - The current time, in 24-hour military time (0-23)
# \r - The current time, standard 12-hour time (1-12)
# \s - Seconds of the current time
# \x1b[...m - insert ANSI escape sequence
prompt = "\t :\d> "
prompt_continuation = "-> "
# Show/hide the informational toolbar with function keymap at the footer.
show_bottom_toolbar = True
# Skip intro info on startup and outro info on exit
less_chatty = False
# Use alias from --login-path instead of host name in prompt
login_path_as_host = False
# Cause result sets to be displayed vertically if they are too wide for the current window,
# and using normal tabular format otherwise. (This applies to statements terminated by ; or \G.)
auto_vertical_output = False
# keyword casing preference. Possible values "lower", "upper", "auto"
keyword_casing = auto
# disabled pager on startup
enable_pager = True
[colors]
completion-menu.completion.current = "bg:#ffffff #000000"
completion-menu.completion = "bg:#008888 #ffffff"
completion-menu.meta.completion.current = "bg:#44aaaa #000000"
completion-menu.meta.completion = "bg:#448888 #ffffff"
completion-menu.multi-column-meta = "bg:#aaffff #000000"
scrollbar.arrow = "bg:#003333"
scrollbar = "bg:#00aaaa"
selected = "#ffffff bg:#6666aa"
search = "#ffffff bg:#4444aa"
search.current = "#ffffff bg:#44aa44"
bottom-toolbar = "bg:#222222 #aaaaaa"
bottom-toolbar.off = "bg:#222222 #888888"
bottom-toolbar.on = "bg:#222222 #ffffff"
search-toolbar = noinherit bold
search-toolbar.text = nobold
system-toolbar = noinherit bold
arg-toolbar = noinherit bold
arg-toolbar.text = nobold
bottom-toolbar.transaction.valid = "bg:#222222 #00ff5f bold"
bottom-toolbar.transaction.failed = "bg:#222222 #ff005f bold"
# style classes for colored table output
output.header = "#00ff5f bold"
output.odd-row = ""
output.even-row = ""
Token.Menu.Completions.Completion.Current = "bg:#00aaaa #000000"
Token.Menu.Completions.Completion = "bg:#008888 #ffffff"
Token.Menu.Completions.MultiColumnMeta = "bg:#aaffff #000000"
Token.Menu.Completions.ProgressButton = "bg:#003333"
Token.Menu.Completions.ProgressBar = "bg:#00aaaa"
Token.Output.Header = bold
Token.Output.OddRow = ""
Token.Output.EvenRow = ""
Token.SelectedText = "#ffffff bg:#6666aa"
Token.SearchMatch = "#ffffff bg:#4444aa"
Token.SearchMatch.Current = "#ffffff bg:#44aa44"
Token.Toolbar = "bg:#222222 #aaaaaa"
Token.Toolbar.Off = "bg:#222222 #888888"
Token.Toolbar.On = "bg:#222222 #ffffff"
Token.Toolbar.Search = noinherit bold
Token.Toolbar.Search.Text = nobold
Token.Toolbar.System = noinherit bold
Token.Toolbar.Arg = noinherit bold
Token.Toolbar.Arg.Text = nobold
[favorite_queries]
q_param = select * from test where name=?
sh_param = select * from test where id=$1
| [main]
# Multi-line mode allows breaking up the sql statements into multiple lines. If
# this is set to True, then the end of the statements must have a semi-colon.
# If this is set to False then sql statements can't be split into multiple
# lines. End of line (return) is considered as the end of the statement.
multi_line = False
# Destructive warning mode will alert you before executing a sql statement
# that may cause harm to the database such as "drop table", "drop database"
# or "shutdown".
destructive_warning = True
# log_file location.
# In Unix/Linux: ~/.config/litecli/log
# In Windows: %USERPROFILE%\AppData\Local\dbcli\litecli\log
# %USERPROFILE% is typically C:\Users\{username}
log_file = default
# Default log level. Possible values: "CRITICAL", "ERROR", "WARNING", "INFO"
# and "DEBUG". "NONE" disables logging.
log_level = INFO
# Log every query and its results to a file. Enable this by uncommenting the
# line below.
# audit_log = ~/.litecli-audit.log
# Default pager.
# By default '$PAGER' environment variable is used
# pager = less -SRXF
# Table format. Possible values:
# ascii, double, github, psql, plain, simple, grid, fancy_grid, pipe, orgtbl,
# rst, mediawiki, html, latex, latex_booktabs, textile, moinmoin, jira,
# vertical, tsv, csv.
# Recommended: ascii
table_format = ascii
# Syntax coloring style. Possible values (many support the "-dark" suffix):
# manni, igor, xcode, vim, autumn, vs, rrt, native, perldoc, borland, tango, emacs,
# friendly, monokai, paraiso, colorful, murphy, bw, pastie, paraiso, trac, default,
# fruity.
# Screenshots at http://mycli.net/syntax
syntax_style = default
# Keybindings: Possible values: emacs, vi.
# Emacs mode: Ctrl-A is home, Ctrl-E is end. All emacs keybindings are available in the REPL.
# When Vi mode is enabled you can use modal editing features offered by Vi in the REPL.
key_bindings = emacs
# Enabling this option will show the suggestions in a wider menu. Thus more items are suggested.
wider_completion_menu = False
# Autocompletion is on by default. This can be turned off by setting this
# option to False. Pressing tab will still trigger completion.
autocompletion = True
# litecli prompt
# \D - The full current date
# \d - Database name
# \f - File basename of the "main" database
# \m - Minutes of the current time
# \n - Newline
# \P - AM/PM
# \R - The current time, in 24-hour military time (0-23)
# \r - The current time, standard 12-hour time (1-12)
# \s - Seconds of the current time
# \x1b[...m - insert ANSI escape sequence
prompt = "\t :\d> "
prompt_continuation = "-> "
# Show/hide the informational toolbar with function keymap at the footer.
show_bottom_toolbar = True
# Skip intro info on startup and outro info on exit
less_chatty = False
# Use alias from --login-path instead of host name in prompt
login_path_as_host = False
# Cause result sets to be displayed vertically if they are too wide for the current window,
# and using normal tabular format otherwise. (This applies to statements terminated by ; or \G.)
auto_vertical_output = False
# keyword casing preference. Possible values "lower", "upper", "auto"
keyword_casing = auto
# disabled pager on startup
enable_pager = True
[colors]
completion-menu.completion.current = "bg:#ffffff #000000"
completion-menu.completion = "bg:#008888 #ffffff"
completion-menu.meta.completion.current = "bg:#44aaaa #000000"
completion-menu.meta.completion = "bg:#448888 #ffffff"
completion-menu.multi-column-meta = "bg:#aaffff #000000"
scrollbar.arrow = "bg:#003333"
scrollbar = "bg:#00aaaa"
selected = "#ffffff bg:#6666aa"
search = "#ffffff bg:#4444aa"
search.current = "#ffffff bg:#44aa44"
bottom-toolbar = "bg:#222222 #aaaaaa"
bottom-toolbar.off = "bg:#222222 #888888"
bottom-toolbar.on = "bg:#222222 #ffffff"
search-toolbar = noinherit bold
search-toolbar.text = nobold
system-toolbar = noinherit bold
arg-toolbar = noinherit bold
arg-toolbar.text = nobold
bottom-toolbar.transaction.valid = "bg:#222222 #00ff5f bold"
bottom-toolbar.transaction.failed = "bg:#222222 #ff005f bold"
# style classes for colored table output
output.header = "#00ff5f bold"
output.odd-row = ""
output.even-row = ""
Token.Menu.Completions.Completion.Current = "bg:#00aaaa #000000"
Token.Menu.Completions.Completion = "bg:#008888 #ffffff"
Token.Menu.Completions.MultiColumnMeta = "bg:#aaffff #000000"
Token.Menu.Completions.ProgressButton = "bg:#003333"
Token.Menu.Completions.ProgressBar = "bg:#00aaaa"
Token.Output.Header = bold
Token.Output.OddRow = ""
Token.Output.EvenRow = ""
Token.SelectedText = "#ffffff bg:#6666aa"
Token.SearchMatch = "#ffffff bg:#4444aa"
Token.SearchMatch.Current = "#ffffff bg:#44aa44"
Token.Toolbar = "bg:#222222 #aaaaaa"
Token.Toolbar.Off = "bg:#222222 #888888"
Token.Toolbar.On = "bg:#222222 #ffffff"
Token.Toolbar.Search = noinherit bold
Token.Toolbar.Search.Text = nobold
Token.Toolbar.System = noinherit bold
Token.Toolbar.Arg = noinherit bold
Token.Toolbar.Arg.Text = nobold
[favorite_queries]
q_param = select * from test where name=?
sh_param = select * from test where id=$1
# Startup commands
# litecli commands or sqlite commands to be executed on startup.
# some of them will require you to have a database attached.
# they will be executed in the same order as they appear in the list.
[startup_commands]
commands = "create table startupcommands(a text)", "insert into startupcommands values('abc')"
| bjornasm | e5dacd9f0861d1c3a45e8f339ca3a71f5dee2359 | e95c17f435ccc16f57d3f7dba52d546676690e0c | Oh yes, sorry. Fixed that and added a test to check for the startup commands. Now I am only missing a test for the execution of the startup commands, which I am not sure how to do - the check itself is fine, but I am not sure how to invoke the cli without the cli taking over. | bjornasm | 2
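One possible way to test the execution of startup commands without spawning the interactive prompt is to drive them through the SQL execution layer directly; this is only a hedged sketch, and the import path, constructor, and result shape of `SQLExecute` are assumptions about litecli's internals:

```python
# Hedged sketch: exercises the startup commands through SQLExecute instead of
# invoking the REPL; the SQLExecute import path and constructor are assumed.
from litecli.sqlexecute import SQLExecute

def test_startup_commands_create_table(tmp_path):
    db = str(tmp_path / "test.db")
    sqlexecute = SQLExecute(db)
    commands = [
        "create table startupcommands(a text)",
        "insert into startupcommands values('abc')",
    ]
    for command in commands:
        list(sqlexecute.run(command))
    rows = list(sqlexecute.run("select a from startupcommands"))
    # exact result tuple shape returned by run() is not asserted here
    assert rows, "startup commands should have created and populated the table"
```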
lucidrains/DALLE-pytorch | 327 | Generate text with DALLE | Since DALLE trains a multimodal language model, the text part of the sequence can also be generated from scratch.
I added a new method to generate text in the DALLE class and also an argument in generate.py so that the generated image can be conditioned on a generated text instead of an input text.
To make this work I had to add an "ignore_tokens" argument to the decode method of each tokenizer, otherwise the tokenizers raise an error when trying to decode padding tokens. | null | 2021-06-30 09:36:00+00:00 | 2021-07-08 18:57:49+00:00 | dalle_pytorch/dalle_pytorch.py | from math import log2, sqrt
import torch
from torch import nn, einsum
import torch.nn.functional as F
from axial_positional_embedding import AxialPositionalEmbedding
from einops import rearrange
from dalle_pytorch import distributed_utils
from dalle_pytorch.vae import OpenAIDiscreteVAE, VQGanVAE
from dalle_pytorch.transformer import Transformer, DivideMax
# helpers
def exists(val):
return val is not None
def default(val, d):
return val if exists(val) else d
def always(val):
def inner(*args, **kwargs):
return val
return inner
def is_empty(t):
return t.nelement() == 0
def masked_mean(t, mask, dim = 1):
t = t.masked_fill(~mask[:, :, None], 0.)
return t.sum(dim = 1) / mask.sum(dim = 1)[..., None]
def set_requires_grad(model, value):
for param in model.parameters():
param.requires_grad = value
def eval_decorator(fn):
def inner(model, *args, **kwargs):
was_training = model.training
model.eval()
out = fn(model, *args, **kwargs)
model.train(was_training)
return out
return inner
# sampling helpers
def top_k(logits, thres = 0.5):
num_logits = logits.shape[-1]
k = max(int((1 - thres) * num_logits), 1)
val, ind = torch.topk(logits, k)
probs = torch.full_like(logits, float('-inf'))
probs.scatter_(1, ind, val)
return probs
# discrete vae class
class ResBlock(nn.Module):
def __init__(self, chan):
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(chan, chan, 3, padding = 1),
nn.ReLU(),
nn.Conv2d(chan, chan, 3, padding = 1),
nn.ReLU(),
nn.Conv2d(chan, chan, 1)
)
def forward(self, x):
return self.net(x) + x
class DiscreteVAE(nn.Module):
def __init__(
self,
image_size = 256,
num_tokens = 512,
codebook_dim = 512,
num_layers = 3,
num_resnet_blocks = 0,
hidden_dim = 64,
channels = 3,
smooth_l1_loss = False,
temperature = 0.9,
straight_through = False,
kl_div_loss_weight = 0.,
normalization = ((0.5,) * 3, (0.5,) * 3)
):
super().__init__()
assert log2(image_size).is_integer(), 'image size must be a power of 2'
assert num_layers >= 1, 'number of layers must be greater than or equal to 1'
has_resblocks = num_resnet_blocks > 0
self.image_size = image_size
self.num_tokens = num_tokens
self.num_layers = num_layers
self.temperature = temperature
self.straight_through = straight_through
self.codebook = nn.Embedding(num_tokens, codebook_dim)
hdim = hidden_dim
enc_chans = [hidden_dim] * num_layers
dec_chans = list(reversed(enc_chans))
enc_chans = [channels, *enc_chans]
dec_init_chan = codebook_dim if not has_resblocks else dec_chans[0]
dec_chans = [dec_init_chan, *dec_chans]
enc_chans_io, dec_chans_io = map(lambda t: list(zip(t[:-1], t[1:])), (enc_chans, dec_chans))
enc_layers = []
dec_layers = []
for (enc_in, enc_out), (dec_in, dec_out) in zip(enc_chans_io, dec_chans_io):
enc_layers.append(nn.Sequential(nn.Conv2d(enc_in, enc_out, 4, stride = 2, padding = 1), nn.ReLU()))
dec_layers.append(nn.Sequential(nn.ConvTranspose2d(dec_in, dec_out, 4, stride = 2, padding = 1), nn.ReLU()))
for _ in range(num_resnet_blocks):
dec_layers.insert(0, ResBlock(dec_chans[1]))
enc_layers.append(ResBlock(enc_chans[-1]))
if num_resnet_blocks > 0:
dec_layers.insert(0, nn.Conv2d(codebook_dim, dec_chans[1], 1))
enc_layers.append(nn.Conv2d(enc_chans[-1], num_tokens, 1))
dec_layers.append(nn.Conv2d(dec_chans[-1], channels, 1))
self.encoder = nn.Sequential(*enc_layers)
self.decoder = nn.Sequential(*dec_layers)
self.loss_fn = F.smooth_l1_loss if smooth_l1_loss else F.mse_loss
self.kl_div_loss_weight = kl_div_loss_weight
# take care of normalization within class
self.normalization = normalization
self._register_external_parameters()
def _register_external_parameters(self):
"""Register external parameters for DeepSpeed partitioning."""
if (
not distributed_utils.is_distributed
or not distributed_utils.using_backend(
distributed_utils.DeepSpeedBackend)
):
return
deepspeed = distributed_utils.backend.backend_module
deepspeed.zero.register_external_parameter(self, self.codebook.weight)
def norm(self, images):
if not exists(self.normalization):
return images
means, stds = map(lambda t: torch.as_tensor(t).to(images), self.normalization)
means, stds = map(lambda t: rearrange(t, 'c -> () c () ()'), (means, stds))
images = images.clone()
images.sub_(means).div_(stds)
return images
@torch.no_grad()
@eval_decorator
def get_codebook_indices(self, images):
logits = self(images, return_logits = True)
codebook_indices = logits.argmax(dim = 1).flatten(1)
return codebook_indices
def decode(
self,
img_seq
):
image_embeds = self.codebook(img_seq)
b, n, d = image_embeds.shape
h = w = int(sqrt(n))
image_embeds = rearrange(image_embeds, 'b (h w) d -> b d h w', h = h, w = w)
images = self.decoder(image_embeds)
return images
def forward(
self,
img,
return_loss = False,
return_recons = False,
return_logits = False,
temp = None
):
device, num_tokens, image_size, kl_div_loss_weight = img.device, self.num_tokens, self.image_size, self.kl_div_loss_weight
assert img.shape[-1] == image_size and img.shape[-2] == image_size, f'input must have the correct image size {image_size}'
img = self.norm(img)
logits = self.encoder(img)
if return_logits:
return logits # return logits for getting hard image indices for DALL-E training
temp = default(temp, self.temperature)
soft_one_hot = F.gumbel_softmax(logits, tau = temp, dim = 1, hard = self.straight_through)
sampled = einsum('b n h w, n d -> b d h w', soft_one_hot, self.codebook.weight)
out = self.decoder(sampled)
if not return_loss:
return out
# reconstruction loss
recon_loss = self.loss_fn(img, out)
# kl divergence
logits = rearrange(logits, 'b n h w -> b (h w) n')
log_qy = F.log_softmax(logits, dim = -1)
log_uniform = torch.log(torch.tensor([1. / num_tokens], device = device))
kl_div = F.kl_div(log_uniform, log_qy, None, None, 'batchmean', log_target = True)
loss = recon_loss + (kl_div * kl_div_loss_weight)
if not return_recons:
return loss
return loss, out
# main classes
class CLIP(nn.Module):
def __init__(
self,
*,
dim_text = 512,
dim_image = 512,
dim_latent = 512,
num_text_tokens = 10000,
text_enc_depth = 6,
text_seq_len = 256,
text_heads = 8,
num_visual_tokens = 512,
visual_enc_depth = 6,
visual_heads = 8,
visual_image_size = 256,
visual_patch_size = 32,
channels = 3
):
super().__init__()
self.text_emb = nn.Embedding(num_text_tokens, dim_text)
self.text_pos_emb = nn.Embedding(text_seq_len, dim_text)
self.text_transformer = Transformer(causal = False, seq_len = text_seq_len, dim = dim_text, depth = text_enc_depth, heads = text_heads)
self.to_text_latent = nn.Linear(dim_text, dim_latent, bias = False)
assert visual_image_size % visual_patch_size == 0, 'Image dimensions must be divisible by the patch size.'
num_patches = (visual_image_size // visual_patch_size) ** 2
patch_dim = channels * visual_patch_size ** 2
self.visual_patch_size = visual_patch_size
self.to_visual_embedding = nn.Linear(patch_dim, dim_image)
self.visual_pos_emb = nn.Embedding(num_patches, dim_image)
self.visual_transformer = Transformer(causal = False, seq_len = num_patches, dim = dim_image, depth = visual_enc_depth, heads = visual_heads)
self.to_visual_latent = nn.Linear(dim_image, dim_latent, bias = False)
self.temperature = nn.Parameter(torch.tensor(1.))
def forward(
self,
text,
image,
text_mask = None,
return_loss = False
):
b, device, p = text.shape[0], text.device, self.visual_patch_size
text_emb = self.text_emb(text)
text_emb += self.text_pos_emb(torch.arange(text.shape[1], device = device))
image_patches = rearrange(image, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1 = p, p2 = p)
image_emb = self.to_visual_embedding(image_patches)
image_emb += self.visual_pos_emb(torch.arange(image_emb.shape[1], device = device))
enc_text = self.text_transformer(text_emb, mask = text_mask)
enc_image = self.visual_transformer(image_emb)
if exists(text_mask):
text_latents = masked_mean(enc_text, text_mask, dim = 1)
else:
text_latents = enc_text.mean(dim = 1)
image_latents = enc_image.mean(dim = 1)
text_latents = self.to_text_latent(text_latents)
image_latents = self.to_visual_latent(image_latents)
text_latents, image_latents = map(lambda t: F.normalize(t, p = 2, dim = -1), (text_latents, image_latents))
temp = self.temperature.exp()
if not return_loss:
sim = einsum('n d, n d -> n', text_latents, image_latents) * temp
return sim
sim = einsum('i d, j d -> i j', text_latents, image_latents) * temp
labels = torch.arange(b, device = device)
loss = (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)) / 2
return loss
# main DALL-E class
class DALLE(nn.Module):
def __init__(
self,
*,
dim,
vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth,
heads = 8,
dim_head = 64,
reversible = False,
attn_dropout = 0.,
ff_dropout = 0,
sparse_attn = False,
attn_types = None,
loss_img_weight = 7,
stable = False
):
super().__init__()
assert isinstance(vae, (DiscreteVAE, OpenAIDiscreteVAE, VQGanVAE)), 'vae must be an instance of DiscreteVAE'
image_size = vae.image_size
num_image_tokens = vae.num_tokens
image_fmap_size = (vae.image_size // (2 ** vae.num_layers))
image_seq_len = image_fmap_size ** 2
num_text_tokens = num_text_tokens + text_seq_len # reserve unique padding tokens for each position (text seq len)
self.text_emb = nn.Embedding(num_text_tokens, dim)
self.image_emb = nn.Embedding(num_image_tokens, dim)
self.text_pos_emb = nn.Embedding(text_seq_len + 1, dim) # +1 for <bos>
self.image_pos_emb = AxialPositionalEmbedding(dim, axial_shape = (image_fmap_size, image_fmap_size))
self.num_text_tokens = num_text_tokens # for offsetting logits index and calculating cross entropy loss
self.num_image_tokens = num_image_tokens
self.text_seq_len = text_seq_len
self.image_seq_len = image_seq_len
seq_len = text_seq_len + image_seq_len
total_tokens = num_text_tokens + num_image_tokens
self.total_tokens = total_tokens
self.total_seq_len = seq_len
self.vae = vae
set_requires_grad(self.vae, False) # freeze VAE from being trained
self.transformer = Transformer(
dim = dim,
causal = True,
seq_len = seq_len,
depth = depth,
heads = heads,
dim_head = dim_head,
reversible = reversible,
attn_dropout = attn_dropout,
ff_dropout = ff_dropout,
attn_types = attn_types,
image_fmap_size = image_fmap_size,
sparse_attn = sparse_attn,
stable = stable
)
self.stable = stable
if stable:
self.norm_by_max = DivideMax(dim = -1)
self.to_logits = nn.Sequential(
nn.LayerNorm(dim),
nn.Linear(dim, self.total_tokens),
)
seq_range = torch.arange(seq_len)
logits_range = torch.arange(total_tokens)
seq_range = rearrange(seq_range, 'n -> () n ()')
logits_range = rearrange(logits_range, 'd -> () () d')
logits_mask = (
((seq_range >= text_seq_len) & (logits_range < num_text_tokens)) |
((seq_range < text_seq_len) & (logits_range >= num_text_tokens))
)
self.register_buffer('logits_mask', logits_mask, persistent=False)
self.loss_img_weight = loss_img_weight
@torch.no_grad()
@eval_decorator
def generate_images(
self,
text,
*,
clip = None,
mask = None,
filter_thres = 0.5,
temperature = 1.,
img = None,
num_init_img_tokens = None
):
vae, text_seq_len, image_seq_len, num_text_tokens = self.vae, self.text_seq_len, self.image_seq_len, self.num_text_tokens
total_len = text_seq_len + image_seq_len
text = text[:, :text_seq_len] # make sure text is within bounds
out = text
if exists(img):
image_size = vae.image_size
assert img.shape[1] == 3 and img.shape[2] == image_size and img.shape[3] == image_size, f'input image must have the correct image size {image_size}'
indices = vae.get_codebook_indices(img)
num_img_tokens = default(num_init_img_tokens, int(0.4375 * image_seq_len)) # OpenAI used 14 * 32 initial tokens to prime
assert num_img_tokens < image_seq_len, 'number of initial image tokens for priming must be less than the total image token sequence length'
indices = indices[:, :num_img_tokens]
out = torch.cat((out, indices), dim = -1)
for cur_len in range(out.shape[1], total_len):
is_image = cur_len >= text_seq_len
text, image = out[:, :text_seq_len], out[:, text_seq_len:]
logits = self(text, image, mask = mask)[:, -1, :]
filtered_logits = top_k(logits, thres = filter_thres)
probs = F.softmax(filtered_logits / temperature, dim = -1)
sample = torch.multinomial(probs, 1)
sample -= (num_text_tokens if is_image else 0) # offset sampled token if it is an image token, since logit space is composed of text and then image tokens
out = torch.cat((out, sample), dim=-1)
if out.shape[1] <= text_seq_len:
mask = F.pad(mask, (0, 1), value = True)
text_seq = out[:, :text_seq_len]
img_seq = out[:, -image_seq_len:]
images = vae.decode(img_seq)
if exists(clip):
scores = clip(text_seq, images, return_loss = False)
return images, scores
return images
def forward(
self,
text,
image = None,
mask = None,
return_loss = False
):
assert text.shape[-1] == self.text_seq_len, f'the length {text.shape[-1]} of the text tokens you passed in does not have the correct length ({self.text_seq_len})'
device, total_seq_len = text.device, self.total_seq_len
# make sure padding in text tokens get unique padding token id
text_range = torch.arange(self.text_seq_len, device = device) + (self.num_text_tokens - self.text_seq_len)
text = torch.where(text == 0, text_range, text)
# add <bos>
text = F.pad(text, (1, 0), value = 0)
tokens = self.text_emb(text)
tokens += self.text_pos_emb(torch.arange(text.shape[1], device = device))
seq_len = tokens.shape[1]
if exists(image) and not is_empty(image):
is_raw_image = len(image.shape) == 4
if is_raw_image:
image_size = self.vae.image_size
assert tuple(image.shape[1:]) == (3, image_size, image_size), f'invalid image of dimensions {image.shape} passed in during training'
image = self.vae.get_codebook_indices(image)
image_len = image.shape[1]
image_emb = self.image_emb(image)
image_emb += self.image_pos_emb(image_emb)
tokens = torch.cat((tokens, image_emb), dim = 1)
seq_len += image_len
# when training, if the length exceeds the total text + image length
# remove the last token, since it needs not to be trained
if tokens.shape[1] > total_seq_len:
seq_len -= 1
tokens = tokens[:, :-1]
out = self.transformer(tokens)
if self.stable:
out = self.norm_by_max(out)
logits = self.to_logits(out)
# mask logits to make sure text predicts text (except last token), and image predicts image
logits_mask = self.logits_mask[:, :seq_len]
max_neg_value = -torch.finfo(logits.dtype).max
logits.masked_fill_(logits_mask, max_neg_value)
if not return_loss:
return logits
assert exists(image), 'when training, image must be supplied'
offsetted_image = image + self.num_text_tokens
labels = torch.cat((text[:, 1:], offsetted_image), dim = 1)
logits = rearrange(logits, 'b n c -> b c n')
loss_text = F.cross_entropy(logits[:, :, :self.text_seq_len], labels[:, :self.text_seq_len])
loss_img = F.cross_entropy(logits[:, :, self.text_seq_len:], labels[:, self.text_seq_len:])
loss = (loss_text + self.loss_img_weight * loss_img) / (self.loss_img_weight + 1)
return loss
| from math import log2, sqrt
import torch
from torch import nn, einsum
import torch.nn.functional as F
import numpy as np
from axial_positional_embedding import AxialPositionalEmbedding
from einops import rearrange
from dalle_pytorch import distributed_utils, tokenizer
from dalle_pytorch.vae import OpenAIDiscreteVAE, VQGanVAE
from dalle_pytorch.transformer import Transformer, DivideMax
# helpers
def exists(val):
return val is not None
def default(val, d):
return val if exists(val) else d
def always(val):
def inner(*args, **kwargs):
return val
return inner
def is_empty(t):
return t.nelement() == 0
def masked_mean(t, mask, dim = 1):
t = t.masked_fill(~mask[:, :, None], 0.)
return t.sum(dim = 1) / mask.sum(dim = 1)[..., None]
def set_requires_grad(model, value):
for param in model.parameters():
param.requires_grad = value
def eval_decorator(fn):
def inner(model, *args, **kwargs):
was_training = model.training
model.eval()
out = fn(model, *args, **kwargs)
model.train(was_training)
return out
return inner
# sampling helpers
def top_k(logits, thres = 0.5):
num_logits = logits.shape[-1]
k = max(int((1 - thres) * num_logits), 1)
val, ind = torch.topk(logits, k)
probs = torch.full_like(logits, float('-inf'))
probs.scatter_(1, ind, val)
return probs
# discrete vae class
class ResBlock(nn.Module):
def __init__(self, chan):
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(chan, chan, 3, padding = 1),
nn.ReLU(),
nn.Conv2d(chan, chan, 3, padding = 1),
nn.ReLU(),
nn.Conv2d(chan, chan, 1)
)
def forward(self, x):
return self.net(x) + x
class DiscreteVAE(nn.Module):
def __init__(
self,
image_size = 256,
num_tokens = 512,
codebook_dim = 512,
num_layers = 3,
num_resnet_blocks = 0,
hidden_dim = 64,
channels = 3,
smooth_l1_loss = False,
temperature = 0.9,
straight_through = False,
kl_div_loss_weight = 0.,
normalization = ((0.5,) * 3, (0.5,) * 3)
):
super().__init__()
assert log2(image_size).is_integer(), 'image size must be a power of 2'
assert num_layers >= 1, 'number of layers must be greater than or equal to 1'
has_resblocks = num_resnet_blocks > 0
self.image_size = image_size
self.num_tokens = num_tokens
self.num_layers = num_layers
self.temperature = temperature
self.straight_through = straight_through
self.codebook = nn.Embedding(num_tokens, codebook_dim)
hdim = hidden_dim
enc_chans = [hidden_dim] * num_layers
dec_chans = list(reversed(enc_chans))
enc_chans = [channels, *enc_chans]
dec_init_chan = codebook_dim if not has_resblocks else dec_chans[0]
dec_chans = [dec_init_chan, *dec_chans]
enc_chans_io, dec_chans_io = map(lambda t: list(zip(t[:-1], t[1:])), (enc_chans, dec_chans))
enc_layers = []
dec_layers = []
for (enc_in, enc_out), (dec_in, dec_out) in zip(enc_chans_io, dec_chans_io):
enc_layers.append(nn.Sequential(nn.Conv2d(enc_in, enc_out, 4, stride = 2, padding = 1), nn.ReLU()))
dec_layers.append(nn.Sequential(nn.ConvTranspose2d(dec_in, dec_out, 4, stride = 2, padding = 1), nn.ReLU()))
for _ in range(num_resnet_blocks):
dec_layers.insert(0, ResBlock(dec_chans[1]))
enc_layers.append(ResBlock(enc_chans[-1]))
if num_resnet_blocks > 0:
dec_layers.insert(0, nn.Conv2d(codebook_dim, dec_chans[1], 1))
enc_layers.append(nn.Conv2d(enc_chans[-1], num_tokens, 1))
dec_layers.append(nn.Conv2d(dec_chans[-1], channels, 1))
self.encoder = nn.Sequential(*enc_layers)
self.decoder = nn.Sequential(*dec_layers)
self.loss_fn = F.smooth_l1_loss if smooth_l1_loss else F.mse_loss
self.kl_div_loss_weight = kl_div_loss_weight
# take care of normalization within class
self.normalization = normalization
self._register_external_parameters()
def _register_external_parameters(self):
"""Register external parameters for DeepSpeed partitioning."""
if (
not distributed_utils.is_distributed
or not distributed_utils.using_backend(
distributed_utils.DeepSpeedBackend)
):
return
deepspeed = distributed_utils.backend.backend_module
deepspeed.zero.register_external_parameter(self, self.codebook.weight)
def norm(self, images):
if not exists(self.normalization):
return images
means, stds = map(lambda t: torch.as_tensor(t).to(images), self.normalization)
means, stds = map(lambda t: rearrange(t, 'c -> () c () ()'), (means, stds))
images = images.clone()
images.sub_(means).div_(stds)
return images
@torch.no_grad()
@eval_decorator
def get_codebook_indices(self, images):
logits = self(images, return_logits = True)
codebook_indices = logits.argmax(dim = 1).flatten(1)
return codebook_indices
def decode(
self,
img_seq
):
image_embeds = self.codebook(img_seq)
b, n, d = image_embeds.shape
h = w = int(sqrt(n))
image_embeds = rearrange(image_embeds, 'b (h w) d -> b d h w', h = h, w = w)
images = self.decoder(image_embeds)
return images
def forward(
self,
img,
return_loss = False,
return_recons = False,
return_logits = False,
temp = None
):
device, num_tokens, image_size, kl_div_loss_weight = img.device, self.num_tokens, self.image_size, self.kl_div_loss_weight
assert img.shape[-1] == image_size and img.shape[-2] == image_size, f'input must have the correct image size {image_size}'
img = self.norm(img)
logits = self.encoder(img)
if return_logits:
return logits # return logits for getting hard image indices for DALL-E training
temp = default(temp, self.temperature)
soft_one_hot = F.gumbel_softmax(logits, tau = temp, dim = 1, hard = self.straight_through)
sampled = einsum('b n h w, n d -> b d h w', soft_one_hot, self.codebook.weight)
out = self.decoder(sampled)
if not return_loss:
return out
# reconstruction loss
recon_loss = self.loss_fn(img, out)
# kl divergence
logits = rearrange(logits, 'b n h w -> b (h w) n')
log_qy = F.log_softmax(logits, dim = -1)
log_uniform = torch.log(torch.tensor([1. / num_tokens], device = device))
kl_div = F.kl_div(log_uniform, log_qy, None, None, 'batchmean', log_target = True)
loss = recon_loss + (kl_div * kl_div_loss_weight)
if not return_recons:
return loss
return loss, out
# main classes
class CLIP(nn.Module):
def __init__(
self,
*,
dim_text = 512,
dim_image = 512,
dim_latent = 512,
num_text_tokens = 10000,
text_enc_depth = 6,
text_seq_len = 256,
text_heads = 8,
num_visual_tokens = 512,
visual_enc_depth = 6,
visual_heads = 8,
visual_image_size = 256,
visual_patch_size = 32,
channels = 3
):
super().__init__()
self.text_emb = nn.Embedding(num_text_tokens, dim_text)
self.text_pos_emb = nn.Embedding(text_seq_len, dim_text)
self.text_transformer = Transformer(causal = False, seq_len = text_seq_len, dim = dim_text, depth = text_enc_depth, heads = text_heads)
self.to_text_latent = nn.Linear(dim_text, dim_latent, bias = False)
assert visual_image_size % visual_patch_size == 0, 'Image dimensions must be divisible by the patch size.'
num_patches = (visual_image_size // visual_patch_size) ** 2
patch_dim = channels * visual_patch_size ** 2
self.visual_patch_size = visual_patch_size
self.to_visual_embedding = nn.Linear(patch_dim, dim_image)
self.visual_pos_emb = nn.Embedding(num_patches, dim_image)
self.visual_transformer = Transformer(causal = False, seq_len = num_patches, dim = dim_image, depth = visual_enc_depth, heads = visual_heads)
self.to_visual_latent = nn.Linear(dim_image, dim_latent, bias = False)
self.temperature = nn.Parameter(torch.tensor(1.))
def forward(
self,
text,
image,
text_mask = None,
return_loss = False
):
b, device, p = text.shape[0], text.device, self.visual_patch_size
text_emb = self.text_emb(text)
text_emb += self.text_pos_emb(torch.arange(text.shape[1], device = device))
image_patches = rearrange(image, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1 = p, p2 = p)
image_emb = self.to_visual_embedding(image_patches)
image_emb += self.visual_pos_emb(torch.arange(image_emb.shape[1], device = device))
enc_text = self.text_transformer(text_emb, mask = text_mask)
enc_image = self.visual_transformer(image_emb)
if exists(text_mask):
text_latents = masked_mean(enc_text, text_mask, dim = 1)
else:
text_latents = enc_text.mean(dim = 1)
image_latents = enc_image.mean(dim = 1)
text_latents = self.to_text_latent(text_latents)
image_latents = self.to_visual_latent(image_latents)
text_latents, image_latents = map(lambda t: F.normalize(t, p = 2, dim = -1), (text_latents, image_latents))
temp = self.temperature.exp()
if not return_loss:
sim = einsum('n d, n d -> n', text_latents, image_latents) * temp
return sim
sim = einsum('i d, j d -> i j', text_latents, image_latents) * temp
labels = torch.arange(b, device = device)
loss = (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)) / 2
return loss
# main DALL-E class
class DALLE(nn.Module):
def __init__(
self,
*,
dim,
vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth,
heads = 8,
dim_head = 64,
reversible = False,
attn_dropout = 0.,
ff_dropout = 0,
sparse_attn = False,
attn_types = None,
loss_img_weight = 7,
stable = False
):
super().__init__()
assert isinstance(vae, (DiscreteVAE, OpenAIDiscreteVAE, VQGanVAE)), 'vae must be an instance of DiscreteVAE'
image_size = vae.image_size
num_image_tokens = vae.num_tokens
image_fmap_size = (vae.image_size // (2 ** vae.num_layers))
image_seq_len = image_fmap_size ** 2
num_text_tokens = num_text_tokens + text_seq_len # reserve unique padding tokens for each position (text seq len)
self.text_emb = nn.Embedding(num_text_tokens, dim)
self.image_emb = nn.Embedding(num_image_tokens, dim)
self.text_pos_emb = nn.Embedding(text_seq_len + 1, dim) # +1 for <bos>
self.image_pos_emb = AxialPositionalEmbedding(dim, axial_shape = (image_fmap_size, image_fmap_size))
self.num_text_tokens = num_text_tokens # for offsetting logits index and calculating cross entropy loss
self.num_image_tokens = num_image_tokens
self.text_seq_len = text_seq_len
self.image_seq_len = image_seq_len
seq_len = text_seq_len + image_seq_len
total_tokens = num_text_tokens + num_image_tokens
self.total_tokens = total_tokens
self.total_seq_len = seq_len
self.vae = vae
set_requires_grad(self.vae, False) # freeze VAE from being trained
self.transformer = Transformer(
dim = dim,
causal = True,
seq_len = seq_len,
depth = depth,
heads = heads,
dim_head = dim_head,
reversible = reversible,
attn_dropout = attn_dropout,
ff_dropout = ff_dropout,
attn_types = attn_types,
image_fmap_size = image_fmap_size,
sparse_attn = sparse_attn,
stable = stable
)
self.stable = stable
if stable:
self.norm_by_max = DivideMax(dim = -1)
self.to_logits = nn.Sequential(
nn.LayerNorm(dim),
nn.Linear(dim, self.total_tokens),
)
seq_range = torch.arange(seq_len)
logits_range = torch.arange(total_tokens)
seq_range = rearrange(seq_range, 'n -> () n ()')
logits_range = rearrange(logits_range, 'd -> () () d')
logits_mask = (
((seq_range >= text_seq_len) & (logits_range < num_text_tokens)) |
((seq_range < text_seq_len) & (logits_range >= num_text_tokens))
)
self.register_buffer('logits_mask', logits_mask, persistent=False)
self.loss_img_weight = loss_img_weight
@torch.no_grad()
@eval_decorator
def generate_texts(
self,
text=None,
*,
filter_thres = 0.5,
temperature = 1.
):
text_seq_len = self.text_seq_len
if text is None or text == "":
text_tokens = torch.tensor([[0]]).cuda()
else:
text_tokens = torch.tensor(tokenizer.tokenizer.encode(text)).cuda().unsqueeze(0)
for _ in range(text_tokens.shape[1], text_seq_len):
device = text_tokens.device
tokens = self.text_emb(text_tokens)
tokens += self.text_pos_emb(torch.arange(text_tokens.shape[1], device = device))
seq_len = tokens.shape[1]
output_transf = self.transformer(tokens)
if self.stable:
output_transf = self.norm_by_max(output_transf)
logits = self.to_logits(output_transf)
# mask logits to make sure text predicts text (except last token), and image predicts image
logits_mask = self.logits_mask[:, :seq_len]
max_neg_value = -torch.finfo(logits.dtype).max
logits.masked_fill_(logits_mask, max_neg_value)
logits = logits[:, -1, :]
filtered_logits = top_k(logits, thres = filter_thres)
probs = F.softmax(filtered_logits / temperature, dim = -1)
sample = torch.multinomial(probs, 1)
text_tokens = torch.cat((text_tokens, sample), dim=-1)
padding_tokens = set(np.arange(self.text_seq_len) + (self.num_text_tokens - self.text_seq_len))
texts = [tokenizer.tokenizer.decode(text_token, pad_tokens=padding_tokens) for text_token in text_tokens]
return text_tokens, texts
@torch.no_grad()
@eval_decorator
def generate_images(
self,
text,
*,
clip = None,
mask = None,
filter_thres = 0.5,
temperature = 1.,
img = None,
num_init_img_tokens = None
):
vae, text_seq_len, image_seq_len, num_text_tokens = self.vae, self.text_seq_len, self.image_seq_len, self.num_text_tokens
total_len = text_seq_len + image_seq_len
text = text[:, :text_seq_len] # make sure text is within bounds
out = text
if exists(img):
image_size = vae.image_size
assert img.shape[1] == 3 and img.shape[2] == image_size and img.shape[3] == image_size, f'input image must have the correct image size {image_size}'
indices = vae.get_codebook_indices(img)
num_img_tokens = default(num_init_img_tokens, int(0.4375 * image_seq_len)) # OpenAI used 14 * 32 initial tokens to prime
assert num_img_tokens < image_seq_len, 'number of initial image tokens for priming must be less than the total image token sequence length'
indices = indices[:, :num_img_tokens]
out = torch.cat((out, indices), dim = -1)
for cur_len in range(out.shape[1], total_len):
is_image = cur_len >= text_seq_len
text, image = out[:, :text_seq_len], out[:, text_seq_len:]
logits = self(text, image, mask = mask)[:, -1, :]
filtered_logits = top_k(logits, thres = filter_thres)
probs = F.softmax(filtered_logits / temperature, dim = -1)
sample = torch.multinomial(probs, 1)
sample -= (num_text_tokens if is_image else 0) # offset sampled token if it is an image token, since logit space is composed of text and then image tokens
out = torch.cat((out, sample), dim=-1)
if out.shape[1] <= text_seq_len:
mask = F.pad(mask, (0, 1), value = True)
text_seq = out[:, :text_seq_len]
img_seq = out[:, -image_seq_len:]
images = vae.decode(img_seq)
if exists(clip):
scores = clip(text_seq, images, return_loss = False)
return images, scores
return images
def forward(
self,
text,
image = None,
mask = None,
return_loss = False
):
assert text.shape[-1] == self.text_seq_len, f'the length {text.shape[-1]} of the text tokens you passed in does not have the correct length ({self.text_seq_len})'
device, total_seq_len = text.device, self.total_seq_len
# make sure padding in text tokens get unique padding token id
text_range = torch.arange(self.text_seq_len, device = device) + (self.num_text_tokens - self.text_seq_len)
text = torch.where(text == 0, text_range, text)
# add <bos>
text = F.pad(text, (1, 0), value = 0)
tokens = self.text_emb(text)
tokens += self.text_pos_emb(torch.arange(text.shape[1], device = device))
seq_len = tokens.shape[1]
if exists(image) and not is_empty(image):
is_raw_image = len(image.shape) == 4
if is_raw_image:
image_size = self.vae.image_size
assert tuple(image.shape[1:]) == (3, image_size, image_size), f'invalid image of dimensions {image.shape} passed in during training'
image = self.vae.get_codebook_indices(image)
image_len = image.shape[1]
image_emb = self.image_emb(image)
image_emb += self.image_pos_emb(image_emb)
tokens = torch.cat((tokens, image_emb), dim = 1)
seq_len += image_len
# when training, if the length exceeds the total text + image length
# remove the last token, since it needs not to be trained
if tokens.shape[1] > total_seq_len:
seq_len -= 1
tokens = tokens[:, :-1]
out = self.transformer(tokens)
if self.stable:
out = self.norm_by_max(out)
logits = self.to_logits(out)
# mask logits to make sure text predicts text (except last token), and image predicts image
logits_mask = self.logits_mask[:, :seq_len]
max_neg_value = -torch.finfo(logits.dtype).max
logits.masked_fill_(logits_mask, max_neg_value)
if not return_loss:
return logits
assert exists(image), 'when training, image must be supplied'
offsetted_image = image + self.num_text_tokens
labels = torch.cat((text[:, 1:], offsetted_image), dim = 1)
logits = rearrange(logits, 'b n c -> b c n')
loss_text = F.cross_entropy(logits[:, :, :self.text_seq_len], labels[:, :self.text_seq_len])
loss_img = F.cross_entropy(logits[:, :, self.text_seq_len:], labels[:, self.text_seq_len:])
loss = (loss_text + self.loss_img_weight * loss_img) / (self.loss_img_weight + 1)
return loss
| jules-samaran | 01e402e4001d8075004c85b07b12429b8a01e822 | fd931e16925bc1844277be83b96c19d13ab6f196 | being able to provide a text here could be interesting.
Instead of starting from scratch, that could make it possible to complete an initial text | rom1504 | 0 |
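For reference, a hedged usage sketch of the `generate_texts` method shown above - both from scratch and, as suggested in this comment, completing an initial text. The trained `dalle` instance is assumed to exist and to live on a CUDA device, since the method as written moves tokens with `.cuda()`:

```python
# Sketch only: assumes a trained DALLE instance `dalle` on a CUDA device.

# Unconditional generation: start from an empty prompt.
text_tokens, texts = dalle.generate_texts(filter_thres=0.9, temperature=1.0)
print(texts[0])

# Prompted completion (the idea raised here): pass a partial caption and let
# the model fill in the remaining text tokens.
text_tokens, texts = dalle.generate_texts("a photo of a", filter_thres=0.9)
print(texts[0])

# The generated tokens can then be fed to generate_images() to get an image
# for the generated caption.
images = dalle.generate_images(text_tokens, filter_thres=0.9)
```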
lucidrains/DALLE-pytorch | 327 | Generate text with DALLE | Since DALLE trains a multimodal language model, the text part of the sequence can also be generated from scratch.
I added a new method to generate text in the DALLE class and also an argument in generate.py so that the generated image can be conditioned on a generated text instead of an input text.
To make this work I had to add an "ignore_tokens" argument to the decode method of each tokenizer, otherwise the tokenizers raise an error when trying to decode padding tokens. | null | 2021-06-30 09:36:00+00:00 | 2021-07-08 18:57:49+00:00 | dalle_pytorch/tokenizer.py | # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = [0])
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
| # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens if token not in pad_tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = pad_tokens.union({0}))
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
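For context, the change captured in this record threads a `pad_tokens` argument through each tokenizer's `decode` so that padding ids can be filtered out of generated sequences. A minimal sketch of the intended call pattern, based on the `after` version of `dalle_pytorch/tokenizer.py` above (the sample ids and the padding id `0` are illustrative, not taken from the PR):

```python
import torch
from dalle_pytorch.tokenizer import SimpleTokenizer

tokenizer = SimpleTokenizer()

# a generated text sequence whose trailing zeros are padding
token_ids = torch.tensor([320, 1125, 0, 0])

# ids listed in pad_tokens are skipped before the BPE decoder lookup
text = tokenizer.decode(token_ids, pad_tokens = {0})
```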
| jules-samaran | 01e402e4001d8075004c85b07b12429b8a01e822 | fd931e16925bc1844277be83b96c19d13ab6f196 | the variable used below seems to be pad_tokens not ignore_pad_tokens | rom1504 | 1 |
lucidrains/DALLE-pytorch | 327 | Generate text with DALLE | Since DALLE trains a multimodal language model, the text part of the sequence can also be generated from scratch.
I added a new method to generate text in the DALLE class and also an argument in generate.py so that the generated image can be conditioned on a generated text instead of an input text.
To make this work I had to add an "ignore_tokens" argument to the decode method of each tokenizer, otherwise the tokenizers raise an error when trying to decode padding tokens. | null | 2021-06-30 09:36:00+00:00 | 2021-07-08 18:57:49+00:00 | dalle_pytorch/tokenizer.py | # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = [0])
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
| # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens if token not in pad_tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = pad_tokens.union({0}))
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
| jules-samaran | 01e402e4001d8075004c85b07b12429b8a01e822 | fd931e16925bc1844277be83b96c19d13ab6f196 | what do you think about making this a set instead of a list ? the `in` operator of a set is O(1) instead of O(n) for a list. It will be a little bit faster | rom1504 | 2 |
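The reviewer's point in the comment above is about the cost of the membership test used to filter padding ids: `token in pad_tokens` is O(n) when `pad_tokens` is a list but an average O(1) hash lookup when it is a set. A small self-contained illustration (not part of the PR):

```python
import timeit

pad_list = list(range(10_000))
pad_set = set(pad_list)

# the list lookup scans every element in the worst case ...
print(timeit.timeit(lambda: 9_999 in pad_list, number=1_000))
# ... while the set does a constant-time hash lookup
print(timeit.timeit(lambda: 9_999 in pad_set, number=1_000))
```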
lucidrains/DALLE-pytorch | 327 | Generate text with DALLE | Since DALLE trains a multimodal language model, the text part of the sequence can also be generated from scratch.
I added a new method to generate text in the DALLE class and also an argument in generate.py so that the generated image can be conditioned on a generated text instead of an input text.
To make this work I had to add an "ignore_tokens" argument to the decode method of each tokenizer, otherwise the tokenizers raise an error when trying to decode padding tokens. | null | 2021-06-30 09:36:00+00:00 | 2021-07-08 18:57:49+00:00 | dalle_pytorch/tokenizer.py | # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = [0])
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
| # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens if token not in pad_tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = pad_tokens.union({0}))
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
| jules-samaran | 01e402e4001d8075004c85b07b12429b8a01e822 | fd931e16925bc1844277be83b96c19d13ab6f196 | Nice catch | jules-samaran | 3 |
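The PR description repeated in these rows also mentions a new method for generating text with the DALLE class, plus an argument in `generate.py` to condition image generation on that generated text. The sketch below is purely hypothetical: the method name, arguments and return value are assumptions, not the confirmed API.

```python
# hypothetical sketch: assumes a trained `dalle` model and a tokenizer as in the repo's README;
# `generate_texts` and its arguments are illustrative only, not the confirmed signature
text_tokens = dalle.generate_texts(tokenizer, text = 'a running', filter_thres = 0.9)

# the generated text tokens can then condition image generation
images = dalle.generate_images(text_tokens, filter_thres = 0.9)
```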
lucidrains/DALLE-pytorch | 327 | Generate text with DALLE | Since DALLE trains a multimodal language model, the text part of the sequence can also be generated from scratch.
I added a new method to generate text in the DALLE class and also an argument in generate.py so that the generated image can be conditioned on a generated text instead of an input text.
To make this work I had to add an "ignore_tokens" argument to the decode method of each tokenizer, otherwise the tokenizers raise an error when trying to decode padding tokens. | null | 2021-06-30 09:36:00+00:00 | 2021-07-08 18:57:49+00:00 | dalle_pytorch/tokenizer.py | # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
tokens = [token for token in tokens if token not in (0,)]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = [0])
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
| # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
# to give users a quick easy start to training DALL-E without doing BPE
import torch
import youtokentome as yttm
from tokenizers import Tokenizer
from tokenizers.processors import ByteLevel
from transformers import BertTokenizer
import html
import os
from functools import lru_cache
from pathlib import Path
import ftfy
import regex as re
# OpenAI simple tokenizer
@lru_cache()
def default_bpe():
return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data/bpe_simple_vocab_16e6.txt")
@lru_cache()
def bytes_to_unicode():
bs = list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
cs = bs[:]
n = 0
for b in range(2 ** 8):
if b not in bs:
bs.append(b)
cs.append(2 ** 8 + n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
def basic_clean(text):
text = ftfy.fix_text(text)
text = html.unescape(html.unescape(text))
return text.strip()
def whitespace_clean(text):
text = re.sub(r'\s+', ' ', text)
text = text.strip()
return text
class SimpleTokenizer(object):
def __init__(self, bpe_path = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = Path(bpe_path).read_text(encoding='utf8').split('\n')
merges = merges[1:49152 - 256 - 2 + 1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v + '</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.vocab_size = 49408
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
re.IGNORECASE)
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token[:-1]) + (token[-1] + '</w>',)
pairs = get_pairs(word)
if not pairs:
return token + '</w>'
while True:
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except:
new_word.extend(word[i:])
break
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
new_word.append(first + second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
text = whitespace_clean(basic_clean(text)).lower()
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens, remove_start_end = True, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
if remove_start_end:
tokens = [token for token in tokens if token not in (49406, 40407, 0)]
text = ''.join([self.decoder[token] for token in tokens if token not in pad_tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
return text
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
tokenizer = SimpleTokenizer()
# huggingface tokenizer
class HugTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = Tokenizer.from_file(str(bpe_path))
tokenizer.post_processor = ByteLevel(trim_offsets = True)
self.tokenizer = tokenizer
self.vocab_size = tokenizer.get_vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens, skip_special_tokens = True)
def encode(self, text):
return self.tokenizer.encode(text).ids
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# chinese tokenizer
class ChineseTokenizer:
def __init__(self):
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
ignore_ids = pad_tokens.union({0})
tokens = [token for token in tokens if token not in ignore_ids]
return self.tokenizer.decode(tokens)
def encode(self, text):
return torch.tensor(self.tokenizer.encode(text, add_special_tokens = False))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = [self.encode(text) for text in texts]
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
# yttm tokenizer
class YttmTokenizer:
def __init__(self, bpe_path = None):
bpe_path = Path(bpe_path)
assert bpe_path.exists(), f'BPE json path {str(bpe_path)} does not exist'
tokenizer = yttm.BPE(model = str(bpe_path))
self.tokenizer = tokenizer
self.vocab_size = tokenizer.vocab_size()
def decode(self, tokens, pad_tokens = {}):
if torch.is_tensor(tokens):
tokens = tokens.tolist()
return self.tokenizer.decode(tokens, ignore_ids = pad_tokens.union({0}))
def encode(self, texts):
encoded = self.tokenizer.encode(texts, output_type = yttm.OutputType.ID)
return list(map(torch.tensor, encoded))
def tokenize(self, texts, context_length = 256, truncate_text = False):
if isinstance(texts, str):
texts = [texts]
all_tokens = self.encode(texts)
result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
for i, tokens in enumerate(all_tokens):
if len(tokens) > context_length:
if truncate_text:
tokens = tokens[:context_length]
else:
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
result[i, :len(tokens)] = torch.tensor(tokens)
return result
| jules-samaran | 01e402e4001d8075004c85b07b12429b8a01e822 | fd931e16925bc1844277be83b96c19d13ab6f196 | Good idea, I've just fixed that | jules-samaran | 4 |
lucidrains/DALLE-pytorch | 320 | stable_softmax, wanb_entity, visible discord, replace buggy colab | edit: alright rom1504 is being awesome and implementing things the proper modular way for us. I'm gonna focus this PR on a few outstanding issues
> Seems the CompVis team hasn't updated their PyPI because their latest `pip` wheel still doesn't contain the necessary `GumbelVQ` class. I've had to install this as a submodule to taming-transformers to get it to work, which doesn't feel quite right. | null | 2021-06-26 19:35:03+00:00 | 2021-06-30 17:15:24+00:00 | README.md | <img src="./images/birds.png" width="500px"></img>
**Current best, trained by <a href="https://github.com/kobiso">Kobiso</a>**
## DALL-E in Pytorch
Implementation / replication of <a href="https://openai.com/blog/dall-e/">DALL-E</a> (<a href="https://arxiv.org/abs/2102.12092">paper</a>), OpenAI's Text to Image Transformer, in Pytorch. It will also contain <a href="https://openai.com/blog/clip/">CLIP</a> for ranking the generations.
<a href="https://github.com/sdtblck">Sid</a>, <a href="http://github.com/kingoflolz">Ben</a>, and <a href="https://github.com/AranKomat">Aran</a> over at <a href="https://www.eleuther.ai/">Eleuther AI</a> are working on <a href="https://github.com/EleutherAI/DALLE-mtf">DALL-E for Mesh Tensorflow</a>! Please lend them a hand if you would like to see DALL-E trained on TPUs.
<a href="https://www.youtube.com/watch?v=j4xgkjWlfL4">Yannic Kilcher's video</a>
Before we replicate this, we can settle for <a href="https://github.com/lucidrains/deep-daze">Deep Daze</a> or <a href="https://github.com/lucidrains/big-sleep">Big Sleep</a>
[](https://colab.research.google.com/drive/1dWvA54k4fH8zAmiix3VXbg95uEIMfqQM?usp=sharing) Train in Colab
## Status
- <a href="https://github.com/htoyryla">Hannu</a> has managed to train a small 6 layer DALL-E on a dataset of just 2000 landscape images! (2048 visual tokens)
<img src="./images/landscape.png"></img>
- <a href="https://github.com/kobiso">Kobiso</a>, a research engineer from Naver, has trained on the CUB200 dataset <a href="https://github.com/lucidrains/DALLE-pytorch/discussions/131">here</a>, using full and deepspeed sparse attention
- <a href="https://github.com/afiaka87">afiaka87</a> has managed one epoch using a 32 layer reversible DALL-E <a href="https://github.com/lucidrains/DALLE-pytorch/issues/86#issue-832121328">here</a>
- <a href="https://github.com/robvanvolt">robvanvolt</a> has started a <a href="https://discord.gg/UhR4kKCSp6">Discord channel</a> for replication efforts
- <a href="https://github.com/robvanvolt">TheodoreGalanos</a> has trained on 150k layouts with the following results
<img src="./images/layouts-1.jpg" width="400px"></img>
<img src="./images/layouts-2.jpg" width="400px"></img>
- <a href="https://github.com/rom1504">Rom1504</a> has trained on 50k fashion images with captions with a really small DALL-E (2 layers) for just 24 hours with the following results
<img src="./images/clothing.png" width="500px"></img>
## Install
```bash
$ pip install dalle-pytorch
```
## Usage
Train VAE
```python
import torch
from dalle_pytorch import DiscreteVAE
vae = DiscreteVAE(
image_size = 256,
num_layers = 3, # number of downsamples - ex. 256 / (2 ** 3) = (32 x 32 feature map)
num_tokens = 8192, # number of visual tokens. in the paper, they used 8192, but could be smaller for downsized projects
codebook_dim = 512, # codebook dimension
hidden_dim = 64, # hidden dimension
num_resnet_blocks = 1, # number of resnet blocks
temperature = 0.9, # gumbel softmax temperature, the lower this is, the harder the discretization
straight_through = False, # straight-through for gumbel softmax. unclear if it is better one way or the other
)
images = torch.randn(4, 3, 256, 256)
loss = vae(images, return_loss = True)
loss.backward()
# train with a lot of data to learn a good codebook
```
Train DALL-E with pretrained VAE from above
```python
import torch
from dalle_pytorch import DiscreteVAE, DALLE
vae = DiscreteVAE(
image_size = 256,
num_layers = 3,
num_tokens = 8192,
codebook_dim = 1024,
hidden_dim = 64,
num_resnet_blocks = 1,
temperature = 0.9
)
dalle = DALLE(
dim = 1024,
vae = vae, # automatically infer (1) image sequence length and (2) number of image tokens
num_text_tokens = 10000, # vocab size for text
text_seq_len = 256, # text sequence length
depth = 12, # should aim to be 64
heads = 16, # attention heads
dim_head = 64, # attention head dimension
attn_dropout = 0.1, # attention dropout
ff_dropout = 0.1 # feedforward dropout
)
text = torch.randint(0, 10000, (4, 256))
images = torch.randn(4, 3, 256, 256)
mask = torch.ones_like(text).bool()
loss = dalle(text, images, mask = mask, return_loss = True)
loss.backward()
# do the above for a long time with a lot of data ... then
images = dalle.generate_images(text, mask = mask)
images.shape # (4, 3, 256, 256)
```
To prime with a starting crop of an image, simply pass two more arguments
```python
img_prime = torch.randn(4, 3, 256, 256)
images = dalle.generate_images(
text,
mask = mask,
img = img_prime,
num_init_img_tokens = (14 * 32) # you can set the size of the initial crop, defaults to a little less than ~1/2 of the tokens, as done in the paper
)
images.shape # (4, 3, 256, 256)
```
## OpenAI's Pretrained VAE
You can also skip the training of the VAE altogether, using the pretrained model released by OpenAI! The wrapper class should take care of downloading and caching the model for you auto-magically.
```python
import torch
from dalle_pytorch import OpenAIDiscreteVAE, DALLE
vae = OpenAIDiscreteVAE() # loads pretrained OpenAI VAE
dalle = DALLE(
dim = 1024,
vae = vae, # automatically infer (1) image sequence length and (2) number of image tokens
num_text_tokens = 10000, # vocab size for text
text_seq_len = 256, # text sequence length
depth = 1, # should aim to be 64
heads = 16, # attention heads
dim_head = 64, # attention head dimension
attn_dropout = 0.1, # attention dropout
ff_dropout = 0.1 # feedforward dropout
)
text = torch.randint(0, 10000, (4, 256))
images = torch.randn(4, 3, 256, 256)
mask = torch.ones_like(text).bool()
loss = dalle(text, images, mask = mask, return_loss = True)
loss.backward()
```
## Taming Transformer's Pretrained VQGAN VAE
You can also use the pretrained VAE offered by the authors of <a href="https://github.com/CompVis/taming-transformers">Taming Transformers</a>! Currently only the VAE with a codebook size of 1024 is offered, with the hope that it may train a little faster than OpenAI's, which has a size of 8192.
In contrast to OpenAI's VAE, it also has an extra layer of downsampling, so the image sequence length is 256 instead of 1024 (since attention cost grows roughly quadratically with sequence length, this works out to about a 16x reduction in training cost). Whether it will generalize as well as the original DALL-E is up to the citizen scientists out there to discover.
Update - <a href="https://github.com/lucidrains/DALLE-pytorch/discussions/131">it works!</a>
```python
from dalle_pytorch import VQGanVAE
vae = VQGanVAE()
# the rest is the same as the above example
```
The default VQGAN is the one with a codebook size of 1024, trained on ImageNet. If you wish to use a different one, you can pass the .ckpt file and the .yaml file via `vqgan_model_path` and `vqgan_config_path`. These options can be used either in the train-dalle script or as arguments of the VQGanVAE class. Other pretrained VQGANs can be found in the [taming transformers readme](https://github.com/CompVis/taming-transformers#overview-of-pretrained-models). If you want to train a custom one, you can [follow this guide](https://github.com/CompVis/taming-transformers/pull/54)
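As a minimal sketch (the checkpoint and config paths below are placeholders for files you trained or downloaded yourself), a custom VQGAN might be loaded like this:

```python
from dalle_pytorch import VQGanVAE

vae = VQGanVAE(
    vqgan_model_path = './path/to/custom_vqgan.ckpt',   # hypothetical checkpoint path
    vqgan_config_path = './path/to/custom_vqgan.yaml'   # hypothetical config path
)

# the rest is the same as the VQGanVAE example above
```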
## Ranking the generations
Train CLIP
```python
import torch
from dalle_pytorch import CLIP
clip = CLIP(
dim_text = 512,
dim_image = 512,
dim_latent = 512,
num_text_tokens = 10000,
text_enc_depth = 6,
text_seq_len = 256,
text_heads = 8,
num_visual_tokens = 512,
visual_enc_depth = 6,
visual_image_size = 256,
visual_patch_size = 32,
visual_heads = 8
)
text = torch.randint(0, 10000, (4, 256))
images = torch.randn(4, 3, 256, 256)
mask = torch.ones_like(text).bool()
loss = clip(text, images, text_mask = mask, return_loss = True)
loss.backward()
```
To get the similarity scores from your trained Clipper, just do
```python
images, scores = dalle.generate_images(text, mask = mask, clip = clip)
scores.shape # (2,)
images.shape # (2, 3, 256, 256)
# do your topk here, in paper they sampled 512 and chose top 32
```
Or you can just use the official <a href="https://github.com/openai/CLIP">CLIP model</a> to rank the images from DALL-E
## Scaling depth
In the blog post, they used 64 layers to achieve their results. I added reversible networks, from the <a href="https://github.com/lucidrains/reformer-pytorch">Reformer</a> paper, in order for users to attempt to scale depth at the cost of compute. Reversible networks allow you to scale to any depth at no memory cost, but a little over 2x compute cost (each layer is rerun on the backward pass).
Simply set the `reversible` keyword to `True` for the `DALLE` class
```python
dalle = DALLE(
dim = 1024,
vae = vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth = 64,
heads = 16,
reversible = True # <-- reversible networks https://arxiv.org/abs/2001.04451
)
```
## Sparse Attention
The blogpost alluded to a mixture of different types of sparse attention, used mainly on the image (while the text presumably had full causal attention). I have done my best to replicate these types of sparse attention, on the scant details released. Primarily, it seems as though they are doing causal axial row / column attention, combined with a causal convolution-like attention.
By default `DALLE` will use full attention for all layers, but you can specify the attention type per layer as follows.
- `full` full attention
- `axial_row` axial attention, along the rows of the image feature map
- `axial_col` axial attention, along the columns of the image feature map
- `conv_like` convolution-like attention, for the image feature map
The sparse attention only applies to the image. Text will always receive full attention, as said in the blogpost.
```python
dalle = DALLE(
dim = 1024,
vae = vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth = 64,
heads = 16,
reversible = True,
attn_types = ('full', 'axial_row', 'axial_col', 'conv_like') # cycles between these four types of attention
)
```
## Deepspeed Sparse Attention
You can also train with Microsoft Deepspeed's <a href="https://www.deepspeed.ai/news/2020/09/08/sparse-attention.html">Sparse Attention</a>, with any combination of dense and sparse attention that you'd like. However, you will have to endure the installation process.
First, you need to install Deepspeed with Sparse Attention
```bash
$ sh install_deepspeed.sh
```
Next, you need to install the pip package `triton`
```bash
$ pip install triton
```
If both of the above succeeded, now you can train with Sparse Attention!
```python
dalle = DALLE(
dim = 512,
vae = vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth = 64,
heads = 8,
attn_types = ('full', 'sparse') # interleave sparse and dense attention for 64 layers
)
```
## Training
This section will outline how to train the discrete variational autoencoder as well as the final multi-modal transformer (DALL-E). We are going to use <a href="https://wandb.ai/">Weights & Biases</a> for all the experiment tracking.
(You can also do everything in this section in a Google Colab, link below)
[](https://colab.research.google.com/drive/1dWvA54k4fH8zAmiix3VXbg95uEIMfqQM?usp=sharing) Train in Colab
```bash
$ pip install wandb
```
Followed by
```bash
$ wandb login
```
### VAE
To train the VAE, you just need to run
```python
$ python train_vae.py --image_folder /path/to/your/images
```
If you installed everything correctly, a link to the experiments page should show up in your terminal. You can follow your link there and customize your experiment, like the example layout below.
<img src="./images/wb.png" width="700px"></img>
You can of course open up the training script at `./train_vae.py`, where you can modify the constants, what is passed to Weights & Biases, or any other tricks you know to make the VAE learn better.
Model will be saved periodically to `./vae.pt`
In the experiment tracker, you will have to monitor the hard reconstruction, as we are essentially teaching the network to compress images into discrete visual tokens for use in the transformer as a visual vocabulary.
Weights and Biases will allow you to monitor the temperature annealing, image reconstructions (encoder and decoder working properly), as well as to watch out for codebook collapse (where the network decides to only use a few tokens out of what you provide it).
Once you have trained a decent VAE to your satisfaction, you can move on to the next step with your model weights at `./vae.pt`.
### DALL-E Training
## Training using an Image-Text-Folder
Now you just have to invoke the `./train_dalle.py` script, indicating which VAE model you would like to use, as well as the path to your folder of images and text.
The dataset I am currently working with contains a folder of images and text files, arbitrarily nested in subfolders, where each text file name corresponds with an image name, and where each text file contains multiple descriptions, delimited by newlines. The script will find and pair all the image and text files with the same names, and randomly select one of the textual descriptions during batch creation.
ex.
```
📂image-and-text-data
┣ 📜cat.png
┣ 📜cat.txt
┣ 📜dog.jpg
┣ 📜dog.txt
┣ 📜turtle.jpeg
┗ 📜turtle.txt
```
ex. `cat.txt`
```text
A black and white cat curled up next to the fireplace
A fireplace, with a cat sleeping next to it
A black cat with a red collar napping
```
If you have a dataset with its own directory structure for tying together image and text descriptions, do let me know in the issues, and I'll see if I can accommodate it in the script.
```python
$ python train_dalle.py --vae_path ./vae.pt --image_text_folder /path/to/data
```
You likely will not finish DALL-E training as quickly as you did your Discrete VAE. To resume from where you left off, just run the same script, but with the path to your DALL-E checkpoints.
```python
$ python train_dalle.py --dalle_path ./dalle.pt --image_text_folder /path/to/data
```
## Training using WebDataset
WebDataset files are regular .tar(.gz) files which can be streamed and used for DALLE-pytorch training.
You just need to provide the image (first comma-separated argument) and caption (second comma-separated argument)
column keys after the --wds argument. The --image_text_folder argument then points to your .tar(.gz) file instead of the data folder.
```python
$ python train_dalle.py --wds img,cap --image_text_folder /path/to/data.tar(.gz)
```
Distributed training with deepspeed works the same way, e.g.:
```python
$ deepspeed train_dalle.py --wds img,cap --image_text_folder /path/to/data.tar(.gz) --fp16 --deepspeed
```
If you have a folder containing shards (the dataset split into several .tar(.gz) files), this is also supported:
```python
$ deepspeed train_dalle.py --wds img,cap --image_text_folder /path/to/shardfolder --fp16 --deepspeed
```
You can stream the data from an HTTP server or Google Cloud Storage like this:
```python
$ deepspeed train_dalle.py --image_text_folder "http://storage.googleapis.com/nvdata-openimages/openimages-train-{000000..000554}.tar" --wds jpg,json --taming --truncate_captions --random_resize_crop_lower_ratio=0.8 --attn_types=full --epochs=2 --fp16 --deepspeed
```
In order to convert your image-text folder to the WebDataset format, you can make use of one of several methods:
this video (https://www.youtube.com/watch?v=v_PacO-3OGQ) walks through 4 examples, and there is a little helper script, which also supports splitting your dataset into shards of .tar.gz files, at https://github.com/robvanvolt/DALLE-datasets/blob/main/wds_create_shards.py
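If you would rather not rely on external tooling, the rough sketch below packs such a folder into a single WebDataset-compatible .tar shard using only the standard library (the folder and shard names are placeholders; the file extensions inside the tar become the column keys you pass to `--wds`, e.g. `--wds jpg,txt`):

```python
import tarfile
from pathlib import Path

folder = Path('./image-and-text-data')   # hypothetical folder of paired .jpg / .txt files

with tarfile.open('dataset-000000.tar', 'w') as tar:
    for idx, img_path in enumerate(sorted(folder.glob('*.jpg'))):
        txt_path = img_path.with_suffix('.txt')
        if not txt_path.exists():        # skip images that have no caption file
            continue
        key = f'{idx:06d}'               # samples are grouped by their shared basename
        tar.add(str(img_path), arcname = f'{key}.jpg')
        tar.add(str(txt_path), arcname = f'{key}.txt')
```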
### DALL-E with OpenAI's VAE
You can now also train DALL-E without having to train the Discrete VAE at all, courtesy of OpenAI open-sourcing their model. You simply have to invoke the `train_dalle.py` script without specifying the `--vae_path`
```python
$ python train_dalle.py --image_text_folder /path/to/coco/dataset
```
### DALL-E with Taming Transformer's VQVAE
Just use the `--taming` flag. Highly recommended you use this VAE over the OpenAI one!
```python
$ python train_dalle.py --image_text_folder /path/to/coco/dataset --taming
```
### Generation
Once you have successfully trained DALL-E, you can then use the saved model for generation!
```python
$ python generate.py --dalle_path ./dalle.pt --text 'fireflies in a field under a full moon'
```
You should see your images saved as `./outputs/{your prompt}/{image number}.jpg`
To generate multiple images, just pass in your text with '|' character as a separator.
ex.
```python
$ python generate.py --dalle_path ./dalle.pt --text 'a dog chewing a bone|a cat chasing mice|a frog eating a fly'
```
### Docker
You can use a docker container to make sure the version of Pytorch and Cuda are correct for training DALL-E. <a href="https://docs.docker.com/get-docker/">Docker</a> and <a href='#'>Docker Container Runtime</a> should be installed.
To build:
```bash
docker build -t dalle docker
```
To run in an interactive shell:
```bash
docker run --gpus all -it --mount src="$(pwd)",target=/workspace/dalle,type=bind dalle:latest bash
```
### Distributed Training
#### DeepSpeed
Thanks to <a href="https://github.com/janEbert">janEbert</a>, the repository is now equipped so you can train DALL-E with Microsoft's <a href="https://www.deepspeed.ai/">Deepspeed</a>!
You can simply replace any `$ python <file>.py [args...]` command with
```sh
$ deepspeed <file>.py [args...] --deepspeed
```
to use the aforementioned DeepSpeed library for distributed training, speeding up your experiments.
Modify the `deepspeed_config` dictionary in `train_dalle.py` or
`train_vae.py` according to the DeepSpeed settings you'd like to use
for each one. See the [DeepSpeed configuration
docs](https://www.deepspeed.ai/docs/config-json/) for more
information.
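As a rough illustration only (the keys and values below are placeholders; check the linked docs for what your setup actually needs), such a dictionary might look like:

```python
deepspeed_config = {
    'train_batch_size': 32,              # hypothetical global batch size
    'gradient_accumulation_steps': 1,
    'fp16': {
        'enabled': True                  # 16-bit training
    },
    'zero_optimization': {
        'stage': 1                       # ZeRO stage 1 (optimizer state partitioning)
    },
}
```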
#### DeepSpeed - 32 and 16 bit Precision
As of DeepSpeed version 0.3.16, ZeRO optimizations can be used with
single-precision floating point numbers. If you are using an older
version, you'll have to pass the `--fp16` flag to be able to enable
ZeRO optimizations.
#### DeepSpeed - Apex Automatic Mixed Precision.
Automatic mixed precision is a stable alternative to fp16 which still provides a decent speedup.
In order to run with Apex AMP (through DeepSpeed), you will need to install DeepSpeed using either the Dockerfile or the bash script.
Then you will need to install apex from source.
This may take a while, and you may see some compilation warnings, which can be ignored.
```sh
sh install_apex.sh
```
Now, run `train_dalle.py` with `deepspeed` instead of `python` as done here:
```sh
deepspeed train_dalle.py \
--taming \
--image_text_folder 'DatasetsDir' \
--distr_backend 'deepspeed' \
--amp
```
#### Horovod
[Horovod](https://horovod.ai) offers a stable way to do data-parallel training.
After [installing
Horovod](https://github.com/lucidrains/DALLE-pytorch/wiki/Horovod-Installation),
replace any `$ python <file>.py [args...]` command with
```sh
$ horovodrun -np <num-gpus> <file>.py [args...] --distributed_backend horovod
```
to use the Horovod library for distributed training, speeding up your
experiments. This will multiply your effective batch size per training
step by `<num-gpus>`, so you may need to rescale the learning rate
accordingly.
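A common heuristic (shown here only as a sketch; the training script does not do this for you, and the base learning rate below is hypothetical) is to scale the learning rate linearly with the number of workers:

```python
import horovod.torch as hvd

hvd.init()

base_lr = 3e-4                     # hypothetical single-process learning rate
scaled_lr = base_lr * hvd.size()   # effective batch size grows with the number of workers
```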
#### Custom Tokenizer
This repository supports custom tokenization with <a href="https://github.com/VKCOM/YouTokenToMe">YouTokenToMe</a>, if you wish to use it instead of the default simple tokenizer. Simply pass in an extra `--bpe_path` when invoking `train_dalle.py` and `generate.py`, with the path to your BPE model file.
The only requirement is that you use `0` as the padding during tokenization
ex.
```sh
$ python train_dalle.py --image_text_folder ./path/to/data --bpe_path ./path/to/bpe.model
```
To create a BPE model file from scratch, firstly
```bash
$ pip install youtokentome
```
Then you need to prepare a big text file that is a representative sample of the type of text you want to encode. You can then invoke the `youtokentome` command-line tools. You'll also need to specify the vocab size you wish to use, in addition to the corpus of text.
```bash
$ yttm bpe --vocab_size 8000 --data ./path/to/big/text/file.txt --model ./path/to/bpe.model
```
That's it! The BPE model file is now saved to `./path/to/bpe.model` and you can begin training!
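The same flag should also be passed at generation time, so that your prompts are tokenized with the BPE model you trained, e.g.:

```sh
$ python generate.py --dalle_path ./dalle.pt --bpe_path ./path/to/bpe.model --text 'fireflies in a field under a full moon'
```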
#### Chinese
You can train with a <a href="https://huggingface.co/bert-base-chinese">pretrained chinese tokenizer</a> offered by Huggingface 🤗 by simply passing in an extra flag `--chinese`
ex.
```sh
$ python train_dalle.py --chinese --image_text_folder ./path/to/data
```
```sh
$ python generate.py --chinese --text '追老鼠的猫'
```
## Citations
```bibtex
@misc{ramesh2021zeroshot,
title = {Zero-Shot Text-to-Image Generation},
author = {Aditya Ramesh and Mikhail Pavlov and Gabriel Goh and Scott Gray and Chelsea Voss and Alec Radford and Mark Chen and Ilya Sutskever},
year = {2021},
eprint = {2102.12092},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
```bibtex
@misc{unpublished2021clip,
title = {CLIP: Connecting Text and Images},
    author  = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
year = {2021}
}
```
```bibtex
@misc{kitaev2020reformer,
title = {Reformer: The Efficient Transformer},
author = {Nikita Kitaev and Łukasz Kaiser and Anselm Levskaya},
year = {2020},
eprint = {2001.04451},
archivePrefix = {arXiv},
primaryClass = {cs.LG}
}
```
```bibtex
@misc{esser2021taming,
title = {Taming Transformers for High-Resolution Image Synthesis},
author = {Patrick Esser and Robin Rombach and Björn Ommer},
year = {2021},
eprint = {2012.09841},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
```bibtex
@misc{ding2021cogview,
title = {CogView: Mastering Text-to-Image Generation via Transformers},
author = {Ming Ding and Zhuoyi Yang and Wenyi Hong and Wendi Zheng and Chang Zhou and Da Yin and Junyang Lin and Xu Zou and Zhou Shao and Hongxia Yang and Jie Tang},
year = {2021},
eprint = {2105.13290},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
*Those who do not want to imitate anything, produce nothing.* - Dali
| # DALL-E in Pytorch
<p align='center'>
<a href="https://colab.research.google.com/gist/afiaka87/b29213684a1dd633df20cab49d05209d/train_dalle_pytorch.ipynb">
<img alt="Train DALL-E w/ DeepSpeed" src="https://colab.research.google.com/assets/colab-badge.svg">
</a>
<a href="https://discord.gg/dall-e"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a></br>
<a href="https://github.com/robvanvolt/DALLE-models">Released DALLE Models</a></br>
<a href="https://github.com/rom1504/dalle-service">Web-Hostable DALLE Checkpoints</a></br>
<a href="https://www.youtube.com/watch?v=j4xgkjWlfL4">Yannic Kilcher's video</a>
</p>
Implementation / replication of <a href="https://openai.com/blog/dall-e/">DALL-E</a> (<a href="https://arxiv.org/abs/2102.12092">paper</a>), OpenAI's Text to Image Transformer, in Pytorch. It will also contain <a href="https://openai.com/blog/clip/">CLIP</a> for ranking the generations.
---
[Quick Start](https://github.com/lucidrains/DALLE-pytorch/wiki)
<a href="https://github.com/lucidrains/deep-daze">Deep Daze</a> or <a href="https://github.com/lucidrains/big-sleep">Big Sleep</a> are great alternatives!
## Status
<p align='center'>
- <a href="https://github.com/htoyryla">Hannu</a> has managed to train a small 6 layer DALL-E on a dataset of just 2000 landscape images! (2048 visual tokens)
<img src="./images/landscape.png"></img>
- <a href="https://github.com/kobiso">Kobiso</a>, a research engineer from Naver, has trained on the CUB200 dataset <a href="https://github.com/lucidrains/DALLE-pytorch/discussions/131">here</a>, using full and deepspeed sparse attention
<img src="./images/birds.png" width="256"></img>
- (3/15/21) <a href="https://github.com/afiaka87">afiaka87</a> has managed one epoch using a reversible DALL-E and the dVaE <a href="https://github.com/lucidrains/DALLE-pytorch/issues/86#issue-832121328">here</a>
- <a href="https://github.com/robvanvolt">TheodoreGalanos</a> has trained on 150k layouts with the following results
<p>
<img src="./images/layouts-1.jpg" width="256"></img>
<img src="./images/layouts-2.jpg" width="256"></img>
</p>
- <a href="https://github.com/rom1504">Rom1504</a> has trained on 50k fashion images with captions with a really small DALL-E (2 layers) for just 24 hours with the following results
<p/>
<img src="./images/clothing.png" width="420"></img>
- <a href="https://github.com/afiaka87">afiaka87</a> trained for 6 epochs on the same dataset as before thanks to the efficient 16k VQGAN with the following <a href="https://github.com/lucidrains/DALLE-pytorch/discussions/322>discussion">results</a>
<p align='centered'>
<img src="https://user-images.githubusercontent.com/3994972/123564891-b6f18780-d780-11eb-9019-8a1b6178f861.png" width="420" alt-text='a photo of westwood park, san francisco, from the water in the afternoon'></img>
<img src="https://user-images.githubusercontent.com/3994972/123564776-4c404c00-d780-11eb-9c8e-3356df358df3.png" width="420" alt-text='a female mannequin dressed in an olive button-down shirt and gold palazzo pants'> </img>
</p>
Thanks to the amazing "mega b#6696" you can generate from this checkpoint in colab -
<a href="https://colab.research.google.com/drive/11V2xw1eLPfZvzW8UQyTUhqCEU71w6Pr4?usp=sharing">
<img alt="Run inference on the Afiaka checkpoint in Colab" src="https://colab.research.google.com/assets/colab-badge.svg">
</a>
## Install
```bash
$ pip install dalle-pytorch
```
## Usage
Train VAE
```python
import torch
from dalle_pytorch import DiscreteVAE
vae = DiscreteVAE(
image_size = 256,
num_layers = 3, # number of downsamples - ex. 256 / (2 ** 3) = (32 x 32 feature map)
num_tokens = 8192, # number of visual tokens. in the paper, they used 8192, but could be smaller for downsized projects
codebook_dim = 512, # codebook dimension
hidden_dim = 64, # hidden dimension
num_resnet_blocks = 1, # number of resnet blocks
temperature = 0.9, # gumbel softmax temperature, the lower this is, the harder the discretization
straight_through = False, # straight-through for gumbel softmax. unclear if it is better one way or the other
)
images = torch.randn(4, 3, 256, 256)
loss = vae(images, return_loss = True)
loss.backward()
# train with a lot of data to learn a good codebook
```
Train DALL-E with pretrained VAE from above
```python
import torch
from dalle_pytorch import DiscreteVAE, DALLE
vae = DiscreteVAE(
image_size = 256,
num_layers = 3,
num_tokens = 8192,
codebook_dim = 1024,
hidden_dim = 64,
num_resnet_blocks = 1,
temperature = 0.9
)
dalle = DALLE(
dim = 1024,
vae = vae, # automatically infer (1) image sequence length and (2) number of image tokens
num_text_tokens = 10000, # vocab size for text
text_seq_len = 256, # text sequence length
depth = 12, # should aim to be 64
heads = 16, # attention heads
dim_head = 64, # attention head dimension
attn_dropout = 0.1, # attention dropout
ff_dropout = 0.1 # feedforward dropout
)
text = torch.randint(0, 10000, (4, 256))
images = torch.randn(4, 3, 256, 256)
mask = torch.ones_like(text).bool()
loss = dalle(text, images, mask = mask, return_loss = True)
loss.backward()
# do the above for a long time with a lot of data ... then
images = dalle.generate_images(text, mask = mask)
images.shape # (4, 3, 256, 256)
```
To prime with a starting crop of an image, simply pass two more arguments
```python
img_prime = torch.randn(4, 3, 256, 256)
images = dalle.generate_images(
text,
mask = mask,
img = img_prime,
num_init_img_tokens = (14 * 32) # you can set the size of the initial crop, defaults to a little less than ~1/2 of the tokens, as done in the paper
)
images.shape # (4, 3, 256, 256)
```
## OpenAI's Pretrained VAE
You can also skip the training of the VAE altogether, using the pretrained model released by OpenAI! The wrapper class should take care of downloading and caching the model for you auto-magically.
```python
import torch
from dalle_pytorch import OpenAIDiscreteVAE, DALLE
vae = OpenAIDiscreteVAE() # loads pretrained OpenAI VAE
dalle = DALLE(
dim = 1024,
vae = vae, # automatically infer (1) image sequence length and (2) number of image tokens
num_text_tokens = 10000, # vocab size for text
text_seq_len = 256, # text sequence length
depth = 1, # should aim to be 64
heads = 16, # attention heads
dim_head = 64, # attention head dimension
attn_dropout = 0.1, # attention dropout
ff_dropout = 0.1 # feedforward dropout
)
text = torch.randint(0, 10000, (4, 256))
images = torch.randn(4, 3, 256, 256)
mask = torch.ones_like(text).bool()
loss = dalle(text, images, mask = mask, return_loss = True)
loss.backward()
```
## Taming Transformer's Pretrained VQGAN VAE
You can also use the pretrained VAE offered by the authors of <a href="https://github.com/CompVis/taming-transformers">Taming Transformers</a>! Currently only the VAE with a codebook size of 1024 is offered, with the hope that it may train a little faster than OpenAI's, which has a size of 8192.
In contrast to OpenAI's VAE, it also has an extra layer of downsampling, so the image sequence length is 256 instead of 1024 (since attention cost grows roughly quadratically with sequence length, this works out to about a 16x reduction in training cost). Whether it will generalize as well as the original DALL-E is up to the citizen scientists out there to discover.
Update - <a href="https://github.com/lucidrains/DALLE-pytorch/discussions/131">it works!</a>
```python
from dalle_pytorch import VQGanVAE
vae = VQGanVAE()
# the rest is the same as the above example
```
The default VQGAN is the one with a codebook size of 1024, trained on ImageNet. If you wish to use a different one, you can pass the .ckpt file and the .yaml file via `vqgan_model_path` and `vqgan_config_path`. These options can be used either in the train-dalle script or as arguments of the VQGanVAE class. Other pretrained VQGANs can be found in the [taming transformers readme](https://github.com/CompVis/taming-transformers#overview-of-pretrained-models). If you want to train a custom one, you can [follow this guide](https://github.com/CompVis/taming-transformers/pull/54)
## Ranking the generations
Train CLIP
```python
import torch
from dalle_pytorch import CLIP
clip = CLIP(
dim_text = 512,
dim_image = 512,
dim_latent = 512,
num_text_tokens = 10000,
text_enc_depth = 6,
text_seq_len = 256,
text_heads = 8,
num_visual_tokens = 512,
visual_enc_depth = 6,
visual_image_size = 256,
visual_patch_size = 32,
visual_heads = 8
)
text = torch.randint(0, 10000, (4, 256))
images = torch.randn(4, 3, 256, 256)
mask = torch.ones_like(text).bool()
loss = clip(text, images, text_mask = mask, return_loss = True)
loss.backward()
```
To get the similarity scores from your trained Clipper, just do
```python
images, scores = dalle.generate_images(text, mask = mask, clip = clip)
scores.shape # (2,)
images.shape # (2, 3, 256, 256)
# do your topk here, in paper they sampled 512 and chose top 32
```
Or you can just use the official <a href="https://github.com/openai/CLIP">CLIP model</a> to rank the images from DALL-E
## Scaling depth
In the blog post, they used 64 layers to achieve their results. I added reversible networks, from the <a href="https://github.com/lucidrains/reformer-pytorch">Reformer</a> paper, in order for users to attempt to scale depth at the cost of compute. Reversible networks allow you to scale to any depth at no memory cost, but a little over 2x compute cost (each layer is rerun on the backward pass).
Simply set the `reversible` keyword to `True` for the `DALLE` class
```python
dalle = DALLE(
dim = 1024,
vae = vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth = 64,
heads = 16,
reversible = True # <-- reversible networks https://arxiv.org/abs/2001.04451
)
```
## Sparse Attention
The blogpost alluded to a mixture of different types of sparse attention, used mainly on the image (while the text presumably had full causal attention). I have done my best to replicate these types of sparse attention, on the scant details released. Primarily, it seems as though they are doing causal axial row / column attention, combined with a causal convolution-like attention.
By default `DALLE` will use full attention for all layers, but you can specify the attention type per layer as follows.
- `full` full attention
- `axial_row` axial attention, along the rows of the image feature map
- `axial_col` axial attention, along the columns of the image feature map
- `conv_like` convolution-like attention, for the image feature map
The sparse attention only applies to the image. Text will always receive full attention, as said in the blogpost.
```python
dalle = DALLE(
dim = 1024,
vae = vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth = 64,
heads = 16,
reversible = True,
attn_types = ('full', 'axial_row', 'axial_col', 'conv_like') # cycles between these four types of attention
)
```
## Deepspeed Sparse Attention
You can also train with Microsoft Deepspeed's <a href="https://www.deepspeed.ai/news/2020/09/08/sparse-attention.html">Sparse Attention</a>, with any combination of dense and sparse attention that you'd like. However, you will have to endure the installation process.
First, you need to install Deepspeed with Sparse Attention
```bash
$ sh install_deepspeed.sh
```
Next, you need to install the pip package `triton`
```bash
$ pip install triton
```
If both of the above succeeded, now you can train with Sparse Attention!
```python
dalle = DALLE(
dim = 512,
vae = vae,
num_text_tokens = 10000,
text_seq_len = 256,
depth = 64,
heads = 8,
attn_types = ('full', 'sparse') # interleave sparse and dense attention for 64 layers
)
```
## Training
This section will outline how to train the discrete variational autoencoder as well as the final multi-modal transformer (DALL-E). We are going to use <a href="https://wandb.ai/">Weights & Biases</a> for all the experiment tracking.
(You can also do everything in this section in a Google Colab, link below)
[](https://colab.research.google.com/drive/1dWvA54k4fH8zAmiix3VXbg95uEIMfqQM?usp=sharing) Train in Colab
```bash
$ pip install wandb
```
Followed by
```bash
$ wandb login
```
### VAE
To train the VAE, you just need to run
```python
$ python train_vae.py --image_folder /path/to/your/images
```
If you installed everything correctly, a link to the experiments page should show up in your terminal. You can follow your link there and customize your experiment, like the example layout below.
<img src="./images/wb.png" width="700px"></img>
You can of course open up the training script at `./train_vae.py`, where you can modify the constants, what is passed to Weights & Biases, or any other tricks you know to make the VAE learn better.
Model will be saved periodically to `./vae.pt`
In the experiment tracker, you will have to monitor the hard reconstruction, as we are essentially teaching the network to compress images into discrete visual tokens for use in the transformer as a visual vocabulary.
Weights and Biases will allow you to monitor the temperature annealing, image reconstructions (encoder and decoder working properly), as well as to watch out for codebook collapse (where the network decides to only use a few tokens out of what you provide it).
Once you have trained a decent VAE to your satisfaction, you can move on to the next step with your model weights at `./vae.pt`.
### DALL-E Training
## Training using an Image-Text-Folder
Now you just have to invoke the `./train_dalle.py` script, indicating which VAE model you would like to use, as well as the path to your folder of images and text.
The dataset I am currently working with contains a folder of images and text files, arbitrarily nested in subfolders, where each text file name corresponds with an image name, and where each text file contains multiple descriptions, delimited by newlines. The script will find and pair all the image and text files with the same names, and randomly select one of the textual descriptions during batch creation.
ex.
```
📂image-and-text-data
┣ 📜cat.png
┣ 📜cat.txt
┣ 📜dog.jpg
┣ 📜dog.txt
┣ 📜turtle.jpeg
┗ 📜turtle.txt
```
ex. `cat.txt`
```text
A black and white cat curled up next to the fireplace
A fireplace, with a cat sleeping next to it
A black cat with a red collar napping
```
If you have a dataset with its own directory structure for tying together image and text descriptions, do let me know in the issues, and I'll see if I can accommodate it in the script.
```python
$ python train_dalle.py --vae_path ./vae.pt --image_text_folder /path/to/data
```
You likely will not finish DALL-E training as quickly as you did your Discrete VAE. To resume from where you left off, just run the same script, but with the path to your DALL-E checkpoints.
```python
$ python train_dalle.py --dalle_path ./dalle.pt --image_text_folder /path/to/data
```
## Training using WebDataset
WebDataset files are regular .tar(.gz) files which can be streamed and used for DALLE-pytorch training.
You just need to provide the image (first comma-separated argument) and caption (second comma-separated argument)
column keys after the --wds argument. The --image_text_folder argument then points to your .tar(.gz) file instead of the data folder.
```python
$ python train_dalle.py --wds img,cap --image_text_folder /path/to/data.tar(.gz)
```
Distributed training with deepspeed works the same way, e.g.:
```python
$ deepspeed train_dalle.py --wds img,cap --image_text_folder /path/to/data.tar(.gz) --fp16 --deepspeed
```
If you have a folder containing shards (the dataset split into several .tar(.gz) files), this is also supported:
```python
$ deepspeed train_dalle.py --wds img,cap --image_text_folder /path/to/shardfolder --fp16 --deepspeed
```
You can stream the data from an HTTP server or Google Cloud Storage like this:
```python
$ deepspeed train_dalle.py --image_text_folder "http://storage.googleapis.com/nvdata-openimages/openimages-train-{000000..000554}.tar" --wds jpg,json --taming --truncate_captions --random_resize_crop_lower_ratio=0.8 --attn_types=full --epochs=2 --fp16 --deepspeed
```
In order to convert your image-text folder to the WebDataset format, you can make use of one of several methods:
this video (https://www.youtube.com/watch?v=v_PacO-3OGQ) walks through 4 examples, and there is a little helper script, which also supports splitting your dataset into shards of .tar.gz files, at https://github.com/robvanvolt/DALLE-datasets/blob/main/wds_create_shards.py
### DALL-E with OpenAI's VAE
You can now also train DALL-E without having to train the Discrete VAE at all, courtesy of OpenAI open-sourcing their model. You simply have to invoke the `train_dalle.py` script without specifying the `--vae_path`
```python
$ python train_dalle.py --image_text_folder /path/to/coco/dataset
```
### DALL-E with Taming Transformer's VQVAE
Just use the `--taming` flag. Highly recommended you use this VAE over the OpenAI one!
```python
$ python train_dalle.py --image_text_folder /path/to/coco/dataset --taming
```
### Generation
Once you have successfully trained DALL-E, you can then use the saved model for generation!
```python
$ python generate.py --dalle_path ./dalle.pt --text 'fireflies in a field under a full moon'
```
You should see your images saved as `./outputs/{your prompt}/{image number}.jpg`
To generate multiple images, just pass in your text with '|' character as a separator.
ex.
```python
$ python generate.py --dalle_path ./dalle.pt --text 'a dog chewing a bone|a cat chasing mice|a frog eating a fly'
```
### Docker
You can use a docker container to make sure the version of Pytorch and Cuda are correct for training DALL-E. <a href="https://docs.docker.com/get-docker/">Docker</a> and <a href='#'>Docker Container Runtime</a> should be installed.
To build:
```bash
docker build -t dalle docker
```
To run in an interactive shell:
```bash
docker run --gpus all -it --mount src="$(pwd)",target=/workspace/dalle,type=bind dalle:latest bash
```
### Distributed Training
#### DeepSpeed
Thanks to <a href="https://github.com/janEbert">janEbert</a>, the repository is now equipped so you can train DALL-E with Microsoft's <a href="https://www.deepspeed.ai/">Deepspeed</a>!
You can simply replace any `$ python <file>.py [args...]` command with
```sh
$ deepspeed <file>.py [args...] --deepspeed
```
to use the aforementioned DeepSpeed library for distributed training, speeding up your experiments.
Modify the `deepspeed_config` dictionary in `train_dalle.py` or
`train_vae.py` according to the DeepSpeed settings you'd like to use
for each one. See the [DeepSpeed configuration
docs](https://www.deepspeed.ai/docs/config-json/) for more
information.
#### DeepSpeed - 32 and 16 bit Precision
As of DeepSpeed version 0.3.16, ZeRO optimizations can be used with
single-precision floating point numbers. If you are using an older
version, you'll have to pass the `--fp16` flag to be able to enable
ZeRO optimizations.
#### DeepSpeed - Apex Automatic Mixed Precision.
Automatic mixed precision is a stable alternative to fp16 which still provides a decent speedup.
In order to run with Apex AMP (through DeepSpeed), you will need to install DeepSpeed using either the Dockerfile or the bash script.
Then you will need to install apex from source.
This may take a while, and you may see some compilation warnings, which can be ignored.
```sh
sh install_apex.sh
```
Now, run `train_dalle.py` with `deepspeed` instead of `python` as done here:
```sh
deepspeed train_dalle.py \
--taming \
--image_text_folder 'DatasetsDir' \
--distr_backend 'deepspeed' \
--amp
```
#### Horovod
[Horovod](https://horovod.ai) offers a stable way to do data-parallel training.
After [installing
Horovod](https://github.com/lucidrains/DALLE-pytorch/wiki/Horovod-Installation),
replace any `$ python <file>.py [args...]` command with
```sh
$ horovodrun -np <num-gpus> <file>.py [args...] --distributed_backend horovod
```
to use the Horovod library for distributed training, speeding up your
experiments. This will multiply your effective batch size per training
step by `<num-gpus>`, so you may need to rescale the learning rate
accordingly.
#### Custom Tokenizer
This repository supports custom tokenization with <a href="https://github.com/VKCOM/YouTokenToMe">YouTokenToMe</a>, if you wish to use it instead of the default simple tokenizer. Simply pass in an extra `--bpe_path` when invoking `train_dalle.py` and `generate.py`, with the path to your BPE model file.
The only requirement is that you use `0` as the padding during tokenization
ex.
```sh
$ python train_dalle.py --image_text_folder ./path/to/data --bpe_path ./path/to/bpe.model
```
To create a BPE model file from scratch, firstly
```bash
$ pip install youtokentome
```
Then you need to prepare a big text file that is a representative sample of the type of text you want to encode. You can then invoke the `youtokentome` command-line tools. You'll also need to specify the vocab size you wish to use, in addition to the corpus of text.
```bash
$ yttm bpe --vocab_size 8000 --data ./path/to/big/text/file.txt --model ./path/to/bpe.model
```
That's it! The BPE model file is now saved to `./path/to/bpe.model` and you can begin training!
#### Chinese
You can train with a <a href="https://huggingface.co/bert-base-chinese">pretrained chinese tokenizer</a> offered by Huggingface 🤗 by simply passing in an extra flag `--chinese`
ex.
```sh
$ python train_dalle.py --chinese --image_text_folder ./path/to/data
```
```sh
$ python generate.py --chinese --text '追老鼠的猫'
```
## Citations
```bibtex
@misc{ramesh2021zeroshot,
title = {Zero-Shot Text-to-Image Generation},
author = {Aditya Ramesh and Mikhail Pavlov and Gabriel Goh and Scott Gray and Chelsea Voss and Alec Radford and Mark Chen and Ilya Sutskever},
year = {2021},
eprint = {2102.12092},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
```bibtex
@misc{unpublished2021clip,
title = {CLIP: Connecting Text and Images},
    author  = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
year = {2021}
}
```
```bibtex
@misc{kitaev2020reformer,
title = {Reformer: The Efficient Transformer},
author = {Nikita Kitaev and Łukasz Kaiser and Anselm Levskaya},
year = {2020},
eprint = {2001.04451},
archivePrefix = {arXiv},
primaryClass = {cs.LG}
}
```
```bibtex
@misc{esser2021taming,
title = {Taming Transformers for High-Resolution Image Synthesis},
author = {Patrick Esser and Robin Rombach and Björn Ommer},
year = {2021},
eprint = {2012.09841},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
```bibtex
@misc{ding2021cogview,
title = {CogView: Mastering Text-to-Image Generation via Transformers},
author = {Ming Ding and Zhuoyi Yang and Wenyi Hong and Wendi Zheng and Chang Zhou and Da Yin and Junyang Lin and Xu Zou and Zhou Shao and Hongxia Yang and Jie Tang},
year = {2021},
eprint = {2105.13290},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
*Those who do not want to imitate anything, produce nothing.* - Dali
| afiaka87 | 7eb2e34ac07076a5bc99808b38795bb12e285f26 | 5a255eab032bcd821c2038c808b9682e485b3f1a | This shouldn't be all in bold imo (I mean all the text below, not the title) | rom1504 | 5 |
lucidrains/DALLE-pytorch | 320 | stable_softmax, wanb_entity, visible discord, replace buggy colab | edit: alright rom1504 is being awesome and implementing things the proper modular way for us. I'm gonna focus this PR on a few outstanding issues
> Seems the CompVis team hasn't updated their PyPi because their latest `pip` wheel still doesn't contain the necessary `GumbelVQ` class. I've had to install this as a submodule to taming-transformers to get it to work which doesnt feel quite right. | null | 2021-06-26 19:35:03+00:00 | 2021-06-30 17:15:24+00:00 | dalle_pytorch/distributed_backends/distributed_backend.py | """
An abstract backend for distributed deep learning.
Provides several standard utility methods under a common API.
Please check the documentation of the class `DistributedBackend` for
details to implement a new backend.
"""
from importlib import import_module
class DistributedBackend:
"""An abstract backend class for distributed deep learning.
Provides several standard utility methods under a common API.
Variables that must be overridden:
- BACKEND_MODULE_NAME
- BACKEND_NAME
Methods that must be overridden:
- wrap_arg_parser
- _initialize
- _get_world_size
- _get_rank
- _get_local_rank
- _local_barrier
- _distribute
- _average_all
"""
BACKEND_MODULE_NAME = None
"""Name of the module to import for the backend."""
BACKEND_NAME = None
"""Name of the backend for printing."""
ROOT_RANK = 0
backend_module = None
"""The module to access the backend."""
is_initialized = False
"""Whether the backend is initialized."""
def __init__(self):
if self.BACKEND_MODULE_NAME is None:
raise NotImplementedError('BACKEND_MODULE_NAME is not set')
if self.BACKEND_NAME is None:
raise NotImplementedError('BACKEND_NAME is not set')
def has_backend(self):
"""Return whether the backend module is now imported."""
try:
self.backend_module = import_module(self.BACKEND_MODULE_NAME)
except ModuleNotFoundError:
return False
return True
def check_batch_size(self, batch_size):
"""Check whether the batch size makes sense for distribution."""
assert batch_size >= self.get_world_size(), \
(f"batch size can't be smaller than number of processes "
f'({batch_size} < {self.get_world_size()})')
def wrap_arg_parser(parser):
"""Add arguments to support optional distributed backend usage."""
raise NotImplementedError
def initialize(self):
"""Initialize the distributed backend."""
self._initialize()
self.is_initialized = True
def _initialize(self):
"""Initialize the distributed backend."""
raise NotImplementedError
def require_init(self):
"""Raise an error when the backend has not been initialized yet."""
assert self.is_initialized, \
(f'{BACKEND_NAME} backend has not been initialized; please call '
f'`distributed_utils.initialize` at the start of your script to '
f'allow optional distributed usage')
def get_world_size(self):
"""Return the amount of distributed processes."""
self.require_init()
return self._get_world_size()
def _get_world_size(self):
"""Return the amount of distributed processes."""
raise NotImplementedError
def get_rank(self):
"""Return the global rank of the calling worker process."""
self.require_init()
return self._get_rank()
def _get_rank(self):
"""Return the global rank of the calling worker process."""
raise NotImplementedError
def get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
self.require_init()
return self._get_local_rank()
def _get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
raise NotImplementedError
def is_root_worker(self):
"""Return whether the calling worker has the root rank."""
return self.get_rank() == self.ROOT_RANK
def is_local_root_worker(self):
"""Return whether the calling worker has the root rank on this node."""
return self.get_local_rank() == self.ROOT_RANK
def local_barrier(self):
"""Wait until all processes on this node have called this function."""
self.require_init()
self._local_barrier()
def _local_barrier(self):
"""Wait until all processes on this node have called this function."""
raise NotImplementedError
def distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
self.require_init()
return self._distribute(
args,
model,
optimizer,
model_parameters,
training_data,
lr_scheduler,
**kwargs,
)
def _distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
raise NotImplementedError
def average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
self.require_init()
return self._average_all(tensor)
def _average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
raise NotImplementedError
| """
An abstract backend for distributed deep learning.
Provides several standard utility methods under a common API.
Please check the documentation of the class `DistributedBackend` for
details to implement a new backend.
"""
from importlib import import_module
class DistributedBackend:
"""An abstract backend class for distributed deep learning.
Provides several standard utility methods under a common API.
Variables that must be overridden:
- BACKEND_MODULE_NAME
- BACKEND_NAME
Methods that must be overridden:
- wrap_arg_parser
- _initialize
- _get_world_size
- _get_rank
- _get_local_rank
- _local_barrier
- _distribute
- _average_all
"""
BACKEND_MODULE_NAME = None
"""Name of the module to import for the backend."""
BACKEND_NAME = None
"""Name of the backend for printing."""
ROOT_RANK = 0
backend_module = None
"""The module to access the backend."""
is_initialized = False
"""Whether the backend is initialized."""
def __init__(self):
if self.BACKEND_MODULE_NAME is None:
raise NotImplementedError('BACKEND_MODULE_NAME is not set')
if self.BACKEND_NAME is None:
raise NotImplementedError('BACKEND_NAME is not set')
def has_backend(self):
"""Return whether the backend module is now imported."""
try:
self.backend_module = import_module(self.BACKEND_MODULE_NAME)
except ModuleNotFoundError:
return False
return True
def check_batch_size(self, batch_size):
"""Check whether the batch size makes sense for distribution."""
assert batch_size >= self.get_world_size(), \
(f"batch size can't be smaller than number of processes "
f'({batch_size} < {self.get_world_size()})')
def wrap_arg_parser(self, parser):
"""Add arguments to support optional distributed backend usage."""
raise NotImplementedError
def initialize(self):
"""Initialize the distributed backend."""
self._initialize()
self.is_initialized = True
def _initialize(self):
"""Initialize the distributed backend."""
raise NotImplementedError
def require_init(self):
"""Raise an error when the backend has not been initialized yet."""
assert self.is_initialized, \
(f'{BACKEND_NAME} backend has not been initialized; please call '
f'`distributed_utils.initialize` at the start of your script to '
f'allow optional distributed usage')
def get_world_size(self):
"""Return the amount of distributed processes."""
self.require_init()
return self._get_world_size()
def _get_world_size(self):
"""Return the amount of distributed processes."""
raise NotImplementedError
def get_rank(self):
"""Return the global rank of the calling worker process."""
self.require_init()
return self._get_rank()
def _get_rank(self):
"""Return the global rank of the calling worker process."""
raise NotImplementedError
def get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
self.require_init()
return self._get_local_rank()
def _get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
raise NotImplementedError
def is_root_worker(self):
"""Return whether the calling worker has the root rank."""
return self.get_rank() == self.ROOT_RANK
def is_local_root_worker(self):
"""Return whether the calling worker has the root rank on this node."""
return self.get_local_rank() == self.ROOT_RANK
def local_barrier(self):
"""Wait until all processes on this node have called this function."""
self.require_init()
self._local_barrier()
def _local_barrier(self):
"""Wait until all processes on this node have called this function."""
raise NotImplementedError
def distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
self.require_init()
return self._distribute(
args,
model,
optimizer,
model_parameters,
training_data,
lr_scheduler,
**kwargs,
)
def _distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
raise NotImplementedError
def average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
self.require_init()
return self._average_all(tensor)
def _average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
raise NotImplementedError
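# --- illustrative only (not part of the actual repository): a hypothetical minimal subclass,
# --- showing the members the class docstring above says must be overridden
class DummyBackend(DistributedBackend):
    """A single-process 'backend' that simply runs everything locally."""

    BACKEND_MODULE_NAME = 'torch'   # any importable module satisfies has_backend() for this dummy
    BACKEND_NAME = 'Dummy'

    def wrap_arg_parser(self, parser):
        return parser

    def _initialize(self):
        pass

    def _get_world_size(self):
        return 1

    def _get_rank(self):
        return self.ROOT_RANK

    def _get_local_rank(self):
        return self.ROOT_RANK

    def _local_barrier(self):
        pass

    def _distribute(self, args=None, model=None, optimizer=None, model_parameters=None,
                    training_data=None, lr_scheduler=None, **kwargs):
        # no wrapping is needed when there is only a single process
        return model, optimizer, training_data, lr_scheduler

    def _average_all(self, tensor):
        return tensor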
| afiaka87 | 7eb2e34ac07076a5bc99808b38795bb12e285f26 | 5a255eab032bcd821c2038c808b9682e485b3f1a | any reason for this change? | rom1504 | 6 |
lucidrains/DALLE-pytorch | 320 | stable_softmax, wanb_entity, visible discord, replace buggy colab | edit: alright rom1504 is being awesome and implementing things the proper modular way for us. I'm gonna focus this PR on a few outstanding issues
> Seems the CompVis team hasn't updated their PyPi because their latest `pip` wheel still doesn't contain the necessary `GumbelVQ` class. I've had to install this as a submodule to taming-transformers to get it to work which doesnt feel quite right. | null | 2021-06-26 19:35:03+00:00 | 2021-06-30 17:15:24+00:00 | dalle_pytorch/distributed_backends/distributed_backend.py | """
An abstract backend for distributed deep learning.
Provides several standard utility methods under a common API.
Please check the documentation of the class `DistributedBackend` for
details to implement a new backend.
"""
from importlib import import_module
class DistributedBackend:
"""An abstract backend class for distributed deep learning.
Provides several standard utility methods under a common API.
Variables that must be overridden:
- BACKEND_MODULE_NAME
- BACKEND_NAME
Methods that must be overridden:
- wrap_arg_parser
- _initialize
- _get_world_size
- _get_rank
- _get_local_rank
- _local_barrier
- _distribute
- _average_all
"""
BACKEND_MODULE_NAME = None
"""Name of the module to import for the backend."""
BACKEND_NAME = None
"""Name of the backend for printing."""
ROOT_RANK = 0
backend_module = None
"""The module to access the backend."""
is_initialized = False
"""Whether the backend is initialized."""
def __init__(self):
if self.BACKEND_MODULE_NAME is None:
raise NotImplementedError('BACKEND_MODULE_NAME is not set')
if self.BACKEND_NAME is None:
raise NotImplementedError('BACKEND_NAME is not set')
def has_backend(self):
"""Return whether the backend module is now imported."""
try:
self.backend_module = import_module(self.BACKEND_MODULE_NAME)
except ModuleNotFoundError:
return False
return True
def check_batch_size(self, batch_size):
"""Check whether the batch size makes sense for distribution."""
assert batch_size >= self.get_world_size(), \
(f"batch size can't be smaller than number of processes "
f'({batch_size} < {self.get_world_size()})')
def wrap_arg_parser(parser):
"""Add arguments to support optional distributed backend usage."""
raise NotImplementedError
def initialize(self):
"""Initialize the distributed backend."""
self._initialize()
self.is_initialized = True
def _initialize(self):
"""Initialize the distributed backend."""
raise NotImplementedError
def require_init(self):
"""Raise an error when the backend has not been initialized yet."""
assert self.is_initialized, \
(f'{BACKEND_NAME} backend has not been initialized; please call '
f'`distributed_utils.initialize` at the start of your script to '
f'allow optional distributed usage')
def get_world_size(self):
"""Return the amount of distributed processes."""
self.require_init()
return self._get_world_size()
def _get_world_size(self):
"""Return the amount of distributed processes."""
raise NotImplementedError
def get_rank(self):
"""Return the global rank of the calling worker process."""
self.require_init()
return self._get_rank()
def _get_rank(self):
"""Return the global rank of the calling worker process."""
raise NotImplementedError
def get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
self.require_init()
return self._get_local_rank()
def _get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
raise NotImplementedError
def is_root_worker(self):
"""Return whether the calling worker has the root rank."""
return self.get_rank() == self.ROOT_RANK
def is_local_root_worker(self):
"""Return whether the calling worker has the root rank on this node."""
return self.get_local_rank() == self.ROOT_RANK
def local_barrier(self):
"""Wait until all processes on this node have called this function."""
self.require_init()
self._local_barrier()
def _local_barrier(self):
"""Wait until all processes on this node have called this function."""
raise NotImplementedError
def distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
self.require_init()
return self._distribute(
args,
model,
optimizer,
model_parameters,
training_data,
lr_scheduler,
**kwargs,
)
def _distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
raise NotImplementedError
def average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
self.require_init()
return self._average_all(tensor)
def _average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
raise NotImplementedError
| """
An abstract backend for distributed deep learning.
Provides several standard utility methods under a common API.
Please check the documentation of the class `DistributedBackend` for
details to implement a new backend.
"""
from importlib import import_module
class DistributedBackend:
"""An abstract backend class for distributed deep learning.
Provides several standard utility methods under a common API.
Variables that must be overridden:
- BACKEND_MODULE_NAME
- BACKEND_NAME
Methods that must be overridden:
- wrap_arg_parser
- _initialize
- _get_world_size
- _get_rank
- _get_local_rank
- _local_barrier
- _distribute
- _average_all
"""
BACKEND_MODULE_NAME = None
"""Name of the module to import for the backend."""
BACKEND_NAME = None
"""Name of the backend for printing."""
ROOT_RANK = 0
backend_module = None
"""The module to access the backend."""
is_initialized = False
"""Whether the backend is initialized."""
def __init__(self):
if self.BACKEND_MODULE_NAME is None:
raise NotImplementedError('BACKEND_MODULE_NAME is not set')
if self.BACKEND_NAME is None:
raise NotImplementedError('BACKEND_NAME is not set')
def has_backend(self):
"""Return whether the backend module is now imported."""
try:
self.backend_module = import_module(self.BACKEND_MODULE_NAME)
except ModuleNotFoundError:
return False
return True
def check_batch_size(self, batch_size):
"""Check whether the batch size makes sense for distribution."""
assert batch_size >= self.get_world_size(), \
(f"batch size can't be smaller than number of processes "
f'({batch_size} < {self.get_world_size()})')
def wrap_arg_parser(self, parser):
"""Add arguments to support optional distributed backend usage."""
raise NotImplementedError
def initialize(self):
"""Initialize the distributed backend."""
self._initialize()
self.is_initialized = True
def _initialize(self):
"""Initialize the distributed backend."""
raise NotImplementedError
def require_init(self):
"""Raise an error when the backend has not been initialized yet."""
assert self.is_initialized, \
            (f'{self.BACKEND_NAME} backend has not been initialized; please call '
f'`distributed_utils.initialize` at the start of your script to '
f'allow optional distributed usage')
def get_world_size(self):
"""Return the amount of distributed processes."""
self.require_init()
return self._get_world_size()
def _get_world_size(self):
"""Return the amount of distributed processes."""
raise NotImplementedError
def get_rank(self):
"""Return the global rank of the calling worker process."""
self.require_init()
return self._get_rank()
def _get_rank(self):
"""Return the global rank of the calling worker process."""
raise NotImplementedError
def get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
self.require_init()
return self._get_local_rank()
def _get_local_rank(self):
"""Return the local rank of the calling worker process.
The local rank is the rank based on a single node's processes.
"""
raise NotImplementedError
def is_root_worker(self):
"""Return whether the calling worker has the root rank."""
return self.get_rank() == self.ROOT_RANK
def is_local_root_worker(self):
"""Return whether the calling worker has the root rank on this node."""
return self.get_local_rank() == self.ROOT_RANK
def local_barrier(self):
"""Wait until all processes on this node have called this function."""
self.require_init()
self._local_barrier()
def _local_barrier(self):
"""Wait until all processes on this node have called this function."""
raise NotImplementedError
def distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
self.require_init()
return self._distribute(
args,
model,
optimizer,
model_parameters,
training_data,
lr_scheduler,
**kwargs,
)
def _distribute(
self,
args=None,
model=None,
optimizer=None,
model_parameters=None,
training_data=None,
lr_scheduler=None,
**kwargs,
):
"""Return a distributed model engine, optimizer, dataloader, and
learning rate scheduler. These are obtained by wrapping the
given values with the backend.
"""
raise NotImplementedError
def average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
self.require_init()
return self._average_all(tensor)
def _average_all(self, tensor):
"""Return the average of `tensor` over all workers."""
raise NotImplementedError
| afiaka87 | 7eb2e34ac07076a5bc99808b38795bb12e285f26 | 5a255eab032bcd821c2038c808b9682e485b3f1a | got it, this is just the abstract class and this is done in all inherited classes | rom1504 | 7 |
lucidrains/DALLE-pytorch | 302 | Expose flops_profiler, attn_dropout, ff_dropout | null | null | 2021-06-13 12:41:35+00:00 | 2021-06-15 15:53:18+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | dc147ca0acaa487950d58935aaf157d1763b1d7d | cec0797e8c114f9c8947b4bb4de710720bbc8359 | Woops. ;) | janEbert | 8 |
lucidrains/DALLE-pytorch | 302 | Expose flops_profiler, attn_dropout, ff_dropout | null | null | 2021-06-13 12:41:35+00:00 | 2021-06-15 15:53:18+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | dc147ca0acaa487950d58935aaf157d1763b1d7d | cec0797e8c114f9c8947b4bb4de710720bbc8359 | Could it be that this does not work due to the exception you raise? Just an intuitive guess; maybe `print([...]); return` helps? | janEbert | 9 |
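The review comment above refers to the early exit used once the DeepSpeed flops profiler has produced its report (the `raise StopIteration(...)` branch at step 201 of the training loop). A hedged sketch of the print-and-return style the reviewer suggests, written as a hypothetical helper rather than the code that was actually merged; `save_fn` and `output_name` stand in for the script's `save_model` and `DALLE_OUTPUT_FILE_NAME`:

import sys

def stop_after_profiler(step, profiler_enabled, save_fn, output_name):
    # Hypothetical alternative to raising StopIteration: print a notice,
    # keep the latest checkpoint, and exit cleanly so the profiler output
    # is not followed by an unhandled-exception traceback.
    if profiler_enabled and step == 201:
        print('Profiler has finished running. Stopping training early.')
        save_fn(output_name)
        sys.exit(0)

Inside the inner loop this would replace the `raise StopIteration` branch, e.g. `stop_after_profiler(i, args.flops_profiler, save_model, DALLE_OUTPUT_FILE_NAME)`.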
lucidrains/DALLE-pytorch | 302 | Expose flops_profiler, attn_dropout, ff_dropout | null | null | 2021-06-13 12:41:35+00:00 | 2021-06-15 15:53:18+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | dc147ca0acaa487950d58935aaf157d1763b1d7d | cec0797e8c114f9c8947b4bb4de710720bbc8359 | Good idea! Forgot about this thanks; Definitely the better method compared to parsing the scroll back. | afiaka87 | 10 |
lucidrains/DALLE-pytorch | 296 | Save/Resume optimizer state, scheduler state, and epoch | Save/Resume optimizer state, scheduler state, and epoch
Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
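A minimal sketch of what this implies for the checkpoint layout, assuming the `dalle`, `opt`, `scheduler`, and `epoch` names used in train_dalle.py; the function names and dictionary keys below are illustrative, not the exact format merged by the PR:

import torch

def save_full_checkpoint(path, dalle, opt, scheduler, epoch, dalle_params, vae_params):
    # Illustrative only: persist everything needed to resume, not just the weights.
    torch.save({
        'hparams': dalle_params,
        'vae_params': vae_params,
        'weights': dalle.state_dict(),
        'opt_state': opt.state_dict(),  # optimizer state (e.g. Adam moments)
        'scheduler_state': scheduler.state_dict() if scheduler is not None else None,
        'epoch': epoch,  # epoch to resume from
    }, path)

def load_full_checkpoint(path, dalle, opt, scheduler):
    # Illustrative only: restore the extra state alongside the weights.
    ckpt = torch.load(path, map_location='cpu')
    dalle.load_state_dict(ckpt['weights'])
    if ckpt.get('opt_state') is not None:
        opt.load_state_dict(ckpt['opt_state'])
    if scheduler is not None and ckpt.get('scheduler_state') is not None:
        scheduler.load_state_dict(ckpt['scheduler_state'])
    return ckpt.get('epoch', 0)

With state saved this way, the training loop would start from range(resume_epoch, EPOCHS) rather than range(EPOCHS).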
| null | 2021-06-12 08:23:32+00:00 | 2021-06-16 01:13:10+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| mehdidc | 50fb9711cdbf0af0aac823ff9770f86937bdff9c | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | Nitpicking - could we get a space before that `=` sign there? | afiaka87 | 11 |
lucidrains/DALLE-pytorch | 296 | Save/Resume optimizer state, scheduler state, and epoch | Save/Resume optimizer state, scheduler state, and epoch
Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
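As a rough illustration of the pattern this description asks for (the save_checkpoint helper below is hypothetical, though the checkpoint keys mirror the ones the updated train_dalle.py writes), a resumable checkpoint bundles the weights with optimizer state, scheduler state, and the current epoch:

# Hedged sketch, not part of the PR's train_dalle.py; the helper name is illustrative.
import torch

def save_checkpoint(path, model, optimizer, scheduler, epoch):
    torch.save({
        'weights': model.state_dict(),                 # model parameters
        'opt_state': optimizer.state_dict(),           # e.g. Adam moment estimates
        'scheduler_state': scheduler.state_dict() if scheduler else None,
        'epoch': epoch,                                # where to resume the loop
    }, path)

Without opt_state, an optimizer such as Adam restarts with empty moment estimates after a resume, which is exactly the gap the sentence above points out.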
| null | 2021-06-12 08:23:32+00:00 | 2021-06-16 01:13:10+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| mehdidc | 50fb9711cdbf0af0aac823ff9770f86937bdff9c | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | Oh wow this has bit me so many times on resume! Finally. | afiaka87 | 12 |
lucidrains/DALLE-pytorch | 296 | Save/Resume optimizer state, scheduler state, and epoch | Save/Resume optimizer state, scheduler state, and epoch
Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
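The resume side of the same pattern, again a hedged sketch rather than the PR's own code (load_checkpoint is a hypothetical helper): load the saved dict, restore optimizer and scheduler state when present, and continue the epoch loop from the stored epoch.

# Hedged sketch, not part of the PR's train_dalle.py; the helper name is illustrative.
import torch

def load_checkpoint(path, model, optimizer, scheduler):
    ckpt = torch.load(path, map_location='cpu')
    model.load_state_dict(ckpt['weights'])
    if ckpt.get('opt_state'):
        optimizer.load_state_dict(ckpt['opt_state'])
    if scheduler and ckpt.get('scheduler_state'):
        scheduler.load_state_dict(ckpt['scheduler_state'])
    return ckpt.get('epoch', 0)

# usage sketch: start_epoch = load_checkpoint('dalle.pt', dalle, opt, scheduler)
#               for epoch in range(start_epoch, EPOCHS): ...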
| null | 2021-06-12 08:23:32+00:00 | 2021-06-16 01:13:10+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
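# Note (illustration, not part of the original config): ZeRO would be enabled by adding a key
# such as deepspeed_config['zero_optimization'] = {'stage': 2} here; the check below only
# fires when such a key is present.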
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| mehdidc | 50fb9711cdbf0af0aac823ff9770f86937bdff9c | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | Till now I've had to use the DeepSpeed native optimizers to resume state; awesome. | afiaka87 | 13 |
lucidrains/DALLE-pytorch | 296 | Save/Resume optimizer state, scheduler state, and epoch | Save/Resume optimizer state, scheduler state, and epoch
Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
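A minimal sketch of the idea (illustrative only; the helper names are hypothetical, while the checkpoint keys mirror the ones this PR adds to train_dalle.py):

import torch

def save_checkpoint(path, model, opt, scheduler, epoch):
    # persist everything needed to resume, not just the model weights
    torch.save({
        'weights': model.state_dict(),
        'opt_state': opt.state_dict(),
        'scheduler_state': scheduler.state_dict() if scheduler is not None else None,
        'epoch': epoch,
    }, path)

def load_checkpoint(path, model, opt, scheduler=None):
    ckpt = torch.load(path, map_location='cpu')
    model.load_state_dict(ckpt['weights'])
    if ckpt.get('opt_state'):
        opt.load_state_dict(ckpt['opt_state'])
    if scheduler is not None and ckpt.get('scheduler_state'):
        scheduler.load_state_dict(ckpt['scheduler_state'])
    return ckpt.get('epoch', 0)  # resume the epoch loop from this value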
| null | 2021-06-12 08:23:32+00:00 | 2021-06-16 01:13:10+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| mehdidc | 50fb9711cdbf0af0aac823ff9770f86937bdff9c | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | Thanks, fixed | mehdidc | 14 |
lucidrains/DALLE-pytorch | 293 | add vqgan_model_path and vqgan_config_path parameters for custom vqgan support | load the vqgan model from the provided path and config when not None
This is still a draft:
* I need to run it to check it works
* This feels kind of awkward with the vae_params support directly stored in dalle.pt: where should we go?
Should we move to storing all kinds of vae weights in dalle.pt? Or should we remove the vae weight support and always pass both the vae path and dalle path when resuming/generating? (A commented sketch of the proposed loading change is included further below, after the VQGanVAE1024 download calls.)
Would be glad to have your opinions on this @afiaka87 @robvanvolt @lucidrains
I think this is a feature we need because several people tried to do training with custom VQGAN and got confused as to whether they used the old or new vqgan weights | null | 2021-06-11 09:53:45+00:00 | 2021-06-15 20:44:54+00:00 | dalle_pytorch/vae.py | import io
import sys
import os, sys
import requests
import PIL
import warnings
import os
import hashlib
import urllib
import yaml
from pathlib import Path
from tqdm import tqdm
from math import sqrt
from omegaconf import OmegaConf
from taming.models.vqgan import VQModel
import torch
from torch import nn
import torch.nn.functional as F
from einops import rearrange
from dalle_pytorch import distributed_utils
# constants
CACHE_PATH = os.path.expanduser("~/.cache/dalle")
OPENAI_VAE_ENCODER_PATH = 'https://cdn.openai.com/dall-e/encoder.pkl'
OPENAI_VAE_DECODER_PATH = 'https://cdn.openai.com/dall-e/decoder.pkl'
VQGAN_VAE_PATH = 'https://heibox.uni-heidelberg.de/f/140747ba53464f49b476/?dl=1'
VQGAN_VAE_CONFIG_PATH = 'https://heibox.uni-heidelberg.de/f/6ecf2af6c658432c8298/?dl=1'
# helpers methods
def exists(val):
return val is not None
def default(val, d):
return val if exists(val) else d
def load_model(path):
with open(path, 'rb') as f:
return torch.load(f, map_location = torch.device('cpu'))
def map_pixels(x, eps = 0.1):
return (1 - 2 * eps) * x + eps
def unmap_pixels(x, eps = 0.1):
return torch.clamp((x - eps) / (1 - 2 * eps), 0, 1)
def download(url, filename = None, root = CACHE_PATH):
if (
not distributed_utils.is_distributed
or distributed_utils.backend.is_local_root_worker()
):
os.makedirs(root, exist_ok = True)
filename = default(filename, os.path.basename(url))
download_target = os.path.join(root, filename)
download_target_tmp = os.path.join(root, f'tmp.{filename}')
if os.path.exists(download_target) and not os.path.isfile(download_target):
raise RuntimeError(f"{download_target} exists and is not a regular file")
if (
distributed_utils.is_distributed
and not distributed_utils.backend.is_local_root_worker()
and not os.path.isfile(download_target)
):
# If the file doesn't exist yet, wait until it's downloaded by the root worker.
distributed_utils.backend.local_barrier()
if os.path.isfile(download_target):
return download_target
with urllib.request.urlopen(url) as source, open(download_target_tmp, "wb") as output:
with tqdm(total=int(source.info().get("Content-Length")), ncols=80) as loop:
while True:
buffer = source.read(8192)
if not buffer:
break
output.write(buffer)
loop.update(len(buffer))
os.rename(download_target_tmp, download_target)
if (
distributed_utils.is_distributed
and distributed_utils.backend.is_local_root_worker()
):
distributed_utils.backend.local_barrier()
return download_target
# pretrained Discrete VAE from OpenAI
class OpenAIDiscreteVAE(nn.Module):
def __init__(self):
super().__init__()
self.enc = load_model(download(OPENAI_VAE_ENCODER_PATH))
self.dec = load_model(download(OPENAI_VAE_DECODER_PATH))
self.num_layers = 3
self.image_size = 256
self.num_tokens = 8192
@torch.no_grad()
def get_codebook_indices(self, img):
img = map_pixels(img)
z_logits = self.enc.blocks(img)
z = torch.argmax(z_logits, dim = 1)
return rearrange(z, 'b h w -> b (h w)')
def decode(self, img_seq):
b, n = img_seq.shape
img_seq = rearrange(img_seq, 'b (h w) -> b h w', h = int(sqrt(n)))
z = F.one_hot(img_seq, num_classes = self.num_tokens)
z = rearrange(z, 'b h w c -> b c h w').float()
x_stats = self.dec(z).float()
x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
return x_rec
def forward(self, img):
raise NotImplementedError
# VQGAN from Taming Transformers paper
# https://arxiv.org/abs/2012.09841
class VQGanVAE1024(nn.Module):
def __init__(self):
super().__init__()
model_filename = 'vqgan.1024.model.ckpt'
config_filename = 'vqgan.1024.config.yml'
download(VQGAN_VAE_CONFIG_PATH, config_filename)
download(VQGAN_VAE_PATH, model_filename)
config = OmegaConf.load(str(Path(CACHE_PATH) / config_filename))
model = VQModel(**config.model.params)
state = torch.load(str(Path(CACHE_PATH) / model_filename), map_location = 'cpu')['state_dict']
model.load_state_dict(state, strict = False)
self.model = model
self.num_layers = 4
self.image_size = 256
self.num_tokens = 1024
self._register_external_parameters()
def _register_external_parameters(self):
"""Register external parameters for DeepSpeed partitioning."""
if (
not distributed_utils.is_distributed
or not distributed_utils.using_backend(
distributed_utils.DeepSpeedBackend)
):
return
deepspeed = distributed_utils.backend.backend_module
deepspeed.zero.register_external_parameter(
self, self.model.quantize.embedding.weight)
@torch.no_grad()
def get_codebook_indices(self, img):
b = img.shape[0]
img = (2 * img) - 1
_, _, [_, _, indices] = self.model.encode(img)
return rearrange(indices, '(b n) () -> b n', b = b)
def decode(self, img_seq):
b, n = img_seq.shape
one_hot_indices = F.one_hot(img_seq, num_classes = self.num_tokens).float()
z = (one_hot_indices @ self.model.quantize.embedding.weight)
z = rearrange(z, 'b (h w) c -> b c h w', h = int(sqrt(n)))
img = self.model.decode(z)
img = (img.clamp(-1., 1.) + 1) * 0.5
return img
def forward(self, img):
raise NotImplementedError
| import io
import sys
import os, sys
import requests
import PIL
import warnings
import os
import hashlib
import urllib
import yaml
from pathlib import Path
from tqdm import tqdm
from math import sqrt, log
from omegaconf import OmegaConf
from taming.models.vqgan import VQModel
import torch
from torch import nn
import torch.nn.functional as F
from einops import rearrange
from dalle_pytorch import distributed_utils
# constants
CACHE_PATH = os.path.expanduser("~/.cache/dalle")
OPENAI_VAE_ENCODER_PATH = 'https://cdn.openai.com/dall-e/encoder.pkl'
OPENAI_VAE_DECODER_PATH = 'https://cdn.openai.com/dall-e/decoder.pkl'
VQGAN_VAE_PATH = 'https://heibox.uni-heidelberg.de/f/140747ba53464f49b476/?dl=1'
VQGAN_VAE_CONFIG_PATH = 'https://heibox.uni-heidelberg.de/f/6ecf2af6c658432c8298/?dl=1'
# helpers methods
def exists(val):
return val is not None
def default(val, d):
return val if exists(val) else d
def load_model(path):
with open(path, 'rb') as f:
return torch.load(f, map_location = torch.device('cpu'))
def map_pixels(x, eps = 0.1):
return (1 - 2 * eps) * x + eps
def unmap_pixels(x, eps = 0.1):
return torch.clamp((x - eps) / (1 - 2 * eps), 0, 1)
def download(url, filename = None, root = CACHE_PATH):
if (
not distributed_utils.is_distributed
or distributed_utils.backend.is_local_root_worker()
):
os.makedirs(root, exist_ok = True)
filename = default(filename, os.path.basename(url))
download_target = os.path.join(root, filename)
download_target_tmp = os.path.join(root, f'tmp.{filename}')
if os.path.exists(download_target) and not os.path.isfile(download_target):
raise RuntimeError(f"{download_target} exists and is not a regular file")
if (
distributed_utils.is_distributed
and not distributed_utils.backend.is_local_root_worker()
and not os.path.isfile(download_target)
):
# If the file doesn't exist yet, wait until it's downloaded by the root worker.
distributed_utils.backend.local_barrier()
if os.path.isfile(download_target):
return download_target
with urllib.request.urlopen(url) as source, open(download_target_tmp, "wb") as output:
with tqdm(total=int(source.info().get("Content-Length")), ncols=80) as loop:
while True:
buffer = source.read(8192)
if not buffer:
break
output.write(buffer)
loop.update(len(buffer))
os.rename(download_target_tmp, download_target)
if (
distributed_utils.is_distributed
and distributed_utils.backend.is_local_root_worker()
):
distributed_utils.backend.local_barrier()
return download_target
# pretrained Discrete VAE from OpenAI
class OpenAIDiscreteVAE(nn.Module):
def __init__(self):
super().__init__()
self.enc = load_model(download(OPENAI_VAE_ENCODER_PATH))
self.dec = load_model(download(OPENAI_VAE_DECODER_PATH))
self.num_layers = 3
self.image_size = 256
self.num_tokens = 8192
@torch.no_grad()
def get_codebook_indices(self, img):
img = map_pixels(img)
z_logits = self.enc.blocks(img)
z = torch.argmax(z_logits, dim = 1)
return rearrange(z, 'b h w -> b (h w)')
def decode(self, img_seq):
b, n = img_seq.shape
img_seq = rearrange(img_seq, 'b (h w) -> b h w', h = int(sqrt(n)))
z = F.one_hot(img_seq, num_classes = self.num_tokens)
z = rearrange(z, 'b h w c -> b c h w').float()
x_stats = self.dec(z).float()
x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
return x_rec
def forward(self, img):
raise NotImplementedError
# VQGAN from Taming Transformers paper
# https://arxiv.org/abs/2012.09841
class VQGanVAE(nn.Module):
def __init__(self, vqgan_model_path, vqgan_config_path):
super().__init__()
if vqgan_model_path is None:
model_filename = 'vqgan.1024.model.ckpt'
config_filename = 'vqgan.1024.config.yml'
download(VQGAN_VAE_CONFIG_PATH, config_filename)
download(VQGAN_VAE_PATH, model_filename)
config_path = str(Path(CACHE_PATH) / config_filename)
model_path = str(Path(CACHE_PATH) / model_filename)
else:
model_path = vqgan_model_path
config_path = vqgan_config_path
config = OmegaConf.load(config_path)
model = VQModel(**config.model.params)
state = torch.load(model_path, map_location = 'cpu')['state_dict']
model.load_state_dict(state, strict = False)
print(f"Loaded VQGAN from {model_path} and {config_path}")
self.model = model
self.num_layers = int(log(config.model.params.ddconfig.attn_resolutions[0])/log(2))
self.image_size = 256
self.num_tokens = config.model.params.n_embed
self._register_external_parameters()
def _register_external_parameters(self):
"""Register external parameters for DeepSpeed partitioning."""
if (
not distributed_utils.is_distributed
or not distributed_utils.using_backend(
distributed_utils.DeepSpeedBackend)
):
return
deepspeed = distributed_utils.backend.backend_module
deepspeed.zero.register_external_parameter(
self, self.model.quantize.embedding.weight)
@torch.no_grad()
def get_codebook_indices(self, img):
b = img.shape[0]
img = (2 * img) - 1
_, _, [_, _, indices] = self.model.encode(img)
return rearrange(indices, '(b n) () -> b n', b = b)
def decode(self, img_seq):
b, n = img_seq.shape
one_hot_indices = F.one_hot(img_seq, num_classes = self.num_tokens).float()
z = (one_hot_indices @ self.model.quantize.embedding.weight)
z = rearrange(z, 'b (h w) c -> b c h w', h = int(sqrt(n)))
img = self.model.decode(z)
img = (img.clamp(-1., 1.) + 1) * 0.5
return img
def forward(self, img):
raise NotImplementedError
| rom1504 | cec0797e8c114f9c8947b4bb4de710720bbc8359 | 50fb9711cdbf0af0aac823ff9770f86937bdff9c | I chose to keep the image size hardcoded because adding an image_size parameter would be a larger change; it could be done in another PR. | rom1504 | 15 |
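A note on usage: the diff above replaces the fixed VQGanVAE1024 class with a VQGanVAE class that takes a checkpoint path and a config path, and only falls back to downloading the default 1024-token VQGAN when no model path is given. Below is a minimal, hedged sketch of that interface; the example paths are hypothetical, and exposing them through command-line flags in train_dalle.py/generate.py is assumed rather than shown here.
# Sketch of the new VQGanVAE interface from this PR (not the author's exact usage).
# Passing None for both arguments downloads and loads the default 1024-token VQGAN;
# passing a checkpoint and config path (hypothetical paths below) loads a custom model,
# with num_tokens and num_layers derived from the supplied config.
from dalle_pytorch.vae import VQGanVAE

vae = VQGanVAE(None, None)  # default pretrained VQGAN
# vae = VQGanVAE('checkpoints/my_vqgan.ckpt', 'checkpoints/my_vqgan.yaml')  # custom VQGAN
print(vae.num_tokens, vae.num_layers, vae.image_size)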
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GB of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | This is a small unrelated change which I found useful: if you start multiple runs from the same folder, wandb gets confused and "resumes" from any of the currently running runs.
I can revert the change if you think that's not a good idea. | rom1504 | 16 |
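For context, the core of this PR is the pruning step added to save_model in the diff above: before each new DeepSpeed checkpoint is written, only the most recent N "global*" checkpoint directories are kept. A self-contained sketch of that rotation logic follows; the directory name and keep count are illustrative assumptions standing in for the DeepSpeed checkpoint folder and the --keep_n_checkpoints value.
# Hedged sketch of the checkpoint-rotation logic (not the exact training-script code).
import os
import shutil
from glob import glob

def prune_old_checkpoints(checkpoint_dir, keep_n):
    # DeepSpeed step checkpoints are directories matched here by "global*".
    checkpoints = sorted(glob(os.path.join(checkpoint_dir, "global*")),
                         key=os.path.getmtime, reverse=True)  # newest first
    # Keep the keep_n most recent directories and delete the rest.
    for stale in checkpoints[keep_n:]:
        shutil.rmtree(stale)

prune_old_checkpoints("./dalle-ds-cp", keep_n=2)  # hypothetical folder and count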
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GB of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
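# Rotate old DeepSpeed checkpoints before writing a new one: keep only the
# KEEP_N_CHECKPOINTS most recently modified 'global*' directories and delete the rest.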
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | I've had mixed success with this; sometimes it is in fact able to resume but even in that case, it has lost the state regarding the current epoch (which should perhaps be saved in the dalle.pt checkpoint?)
At any rate - I don't mind this change but I think the proper way to go about things would be to retrieve the generated name from the run, save it somehow; maybe just a hidden dotfile with the name on the first line? And then try to pass it back in on resume.
Probably better to tackle this in another PR I would think. @lucidrains @janEbert what do you think? | afiaka87 | 17 |
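The dotfile idea suggested above is only sketched in the discussion; a minimal illustration of what it could look like follows, assuming the usual wandb run-id API. The file name .wandb_run_id and the helper function are hypothetical and not part of this PR; the current epoch could similarly be stored in the save_obj dict that save_model writes.
# Illustrative sketch only (not part of this PR): persist the W&B run id in a dotfile
# so that a restarted training process re-attaches to the same run on resume.
from pathlib import Path

import wandb

RUN_ID_FILE = Path('.wandb_run_id')  # hypothetical file name

def get_or_create_run_id():
    # Reuse the id from a previous run if the dotfile exists, otherwise create and store one.
    if RUN_ID_FILE.exists():
        return RUN_ID_FILE.read_text().strip()
    run_id = wandb.util.generate_id()
    RUN_ID_FILE.write_text(run_id + '\n')
    return run_id

run = wandb.init(
    project='dalle_train_transformer',
    id=get_or_create_run_id(),
    resume='allow',  # attach to the existing run when the id is already known
)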
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
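# Rotate old DeepSpeed checkpoints before writing a new one: keep only the
# KEEP_N_CHECKPOINTS most recently modified 'global*' directories and delete the rest.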
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | Could we put something like "destructive" or "warning" in the help for this? | afiaka87 | 18 |
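One way to surface that warning in the flag's help string, shown purely as an illustration (the merged help text reads '(Careful) Deletes old deepspeed checkpoints if there are more than n'):
# Illustrative alternative help text that makes the destructive behaviour explicit:
train_group.add_argument('--keep_n_checkpoints', default=None, type=int,
                         help='WARNING (destructive): keeps only the n most recent DeepSpeed checkpoints and deletes the rest')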
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | "(Careful) Deletes old deepspeed checkpoints if there are more than n" perhaps? | afiaka87 | 19 |
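For reference, the pruning behavior that suggested help text describes boils down to the following minimal, standalone sketch (the directory name and `keep_n` value are illustrative only; in the script itself the count comes from `KEEP_N_CHECKPOINTS` and the directory from `cp_path_to_dir`):

import os
import shutil
from glob import glob

def prune_old_checkpoints(cp_dir, keep_n):
    # Sort checkpoint folders newest-first by modification time and delete everything past keep_n.
    checkpoints = sorted(glob(os.path.join(cp_dir, "global*")), key=os.path.getmtime, reverse=True)
    for checkpoint in checkpoints[keep_n:]:
        shutil.rmtree(checkpoint)

# e.g. prune_old_checkpoints("dalle-ds-cp", keep_n=3) keeps only the three most recent global_step* folders.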
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | Hmm, do we know if the `convert_to_fp32.py` file changes the date modified of the checkpoint? Because that might be an edge case where files are accidentally deleted.
Aside from that, I think as long as users know this is a destructive process this should be fine. | afiaka87 | 20
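One way to sidestep the modification-time concern raised above would be to order checkpoints by the global step encoded in the directory name rather than by mtime. The sketch below is only an illustration of that idea, assuming DeepSpeed's default `global_step<N>` tag naming; it is not part of this PR:

import re
import shutil
from glob import glob
from pathlib import Path

def prune_by_global_step(cp_dir, keep_n):
    # Parse the step number out of each "global_step<N>" folder name, so a tool that
    # rewrites or touches a checkpoint later (changing its mtime) cannot reorder the pruning.
    def step_of(path):
        match = re.search(r"global_step(\d+)", Path(path).name)
        return int(match.group(1)) if match else -1
    checkpoints = sorted(glob(str(Path(cp_dir) / "global*")), key=step_of, reverse=True)
    for checkpoint in checkpoints[keep_n:]:
        shutil.rmtree(checkpoint)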
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | not sure what this script does (does it even work?), but if it changes the files then yes, for sure it will change the modified time. The impact would be that these recent files would be kept, which seems reasonable to me. | rom1504 | 21
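The pruning the comment refers to is the block added to save_model in the modified file above: DeepSpeed checkpoint directories are globbed, sorted by modification time with the newest first, and everything beyond the first N is deleted. Below is a minimal standalone sketch of that logic; the names prune_checkpoints and keep_n are illustrative and not taken from the PR.

import os
import shutil
from glob import glob

def prune_checkpoints(cp_dir, keep_n):
    # DeepSpeed writes one "global<step>" directory per saved checkpoint.
    # Sorting by modification time, newest first, keeps the most recently
    # written checkpoints and removes everything past the first keep_n.
    checkpoints = sorted(glob(os.path.join(cp_dir, "global*")), key=os.path.getmtime, reverse=True)
    for checkpoint in checkpoints[keep_n:]:
        shutil.rmtree(checkpoint)

# Example: keep only the two most recent DeepSpeed checkpoints.
# prune_checkpoints("dalle-ds-cp", keep_n=2)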
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | yeah, that kind of "fixing the wandb resuming issue properly" fix should be done in another PR.
Do you want me to revert this change in this PR?
With the current code (resume set to true in wandb when resuming), bugs are occurring, so that's why I changed it here. | rom1504 | 22
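For context on the exchange above: the diff switches wandb.init(..., resume=RESUME) to resume=False, because passing resume=True without the original run id makes W&B misbehave when only the DALL-E checkpoint is being resumed. The sketch below contrasts the two behaviours; init_tracker and run_id are hypothetical names, and the id-based branch is only an assumption about what the "proper fix in another PR" could look like.

import wandb

def init_tracker(model_config, run_id=None):
    if run_id is not None:
        # Resuming properly requires the id of the original W&B run, so that
        # new logs are appended to it instead of colliding with a stale state.
        return wandb.init(project='dalle_train_transformer', id=run_id, resume='allow', config=model_config)
    # What this PR settles on: always start a fresh run, even when the model
    # weights themselves are loaded from an existing checkpoint.
    return wandb.init(project='dalle_train_transformer', resume=False, config=model_config)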
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | added it | rom1504 | 23 |
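The new --keep_n_checkpoints option added in this PR works by deleting the oldest DeepSpeed checkpoint folders once more than n of them exist. Below is a minimal, self-contained Python sketch of that pruning step; the directory name in the example call is hypothetical and not taken from the PR.

import os
import shutil
from glob import glob

def prune_checkpoints(checkpoint_dir, keep_n):
    """Keep only the keep_n most recent DeepSpeed checkpoint folders (named global*)."""
    # Sort checkpoint folders newest-first by modification time, mirroring the PR's save_model logic.
    checkpoints = sorted(glob(os.path.join(checkpoint_dir, "global*")), key=os.path.getmtime, reverse=True)
    # Remove everything beyond the keep_n most recent folders.
    for old in checkpoints[keep_n:]:
        shutil.rmtree(old)

# Hypothetical usage: prune_checkpoints("dalle-ds-cp", keep_n=3)

Sorting by modification time rather than parsing step numbers keeps the pruning independent of DeepSpeed's checkpoint folder-naming scheme beyond the global* prefix.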
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | Ah yeah - I forgot how time worked apparently. Not sure what I was thinking.
Anyway looks ready to go in my opinion.
@lucidrains are you available to merge this? | afiaka87 | 24 |
lucidrains/DALLE-pytorch | 285 | Add an option to keep only N deepspeed checkpoints | Very useful to avoid filling up the disk with hundreds of GBs of checkpoints | null | 2021-06-05 20:11:06+00:00 | 2021-06-13 03:09:59+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument(
'--amp',
action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.'
)
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
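# Minimal trace of the helper above, assuming 'dalle.pt' / 'ckpt-dir' are hypothetical names:
#   cp_path_to_dir('dalle.pt', 'ds')        # -> Path('dalle-ds-cp')
#   cp_path_to_dir(Path('ckpt-dir'), 'ds')  # -> Path('ckpt-dir'), returned unchanged if it already is a directory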
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
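# Note: group_weight is not called elsewhere in this script. A minimal sketch of how such
# param groups are typically consumed, assuming the usual PyTorch optimizer API (illustrative only):
#   groups = [dict(params=decay_params), dict(params=no_decay_params, weight_decay=0.)]
#   opt = Adam(groups, lr=LEARNING_RATE, weight_decay=1e-2)  # decay applies only to the first group
# As written the helper only collects parameters whose names contain 'transformer', and the
# assert expects those to cover the whole model.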
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
}
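# No 'zero_optimization' section is set above, so the stage check just below only fires if one
# is added by hand, e.g. (illustrative DeepSpeed config snippet):
#   deepspeed_config['zero_optimization'] = {'stage': 2}
# With stage >= 2 the model weights live in the DeepSpeed checkpoint folder rather than in a
# plain .pt file, which is what the warnings below are about.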
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
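# Minimal reload sketch mirroring the RESUME branch at the top of this file (illustrative;
# the vae must first be rebuilt from the saved vae_params):
#   loaded = torch.load(DALLE_OUTPUT_FILE_NAME, map_location='cpu')
#   hparams, vae_params, weights = loaded['hparams'], loaded['vae_params'], loaded['weights']
#   dalle = DALLE(vae=vae, **hparams); dalle.load_state_dict(weights)
# For DeepSpeed ZeRO stage >= 2 runs the 'weights' entry is only a pointer string (see above),
# so the real weights have to come from the DeepSpeed checkpoint directory instead.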
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 77da3edf4e76ba614fdf8bb57c7455ede104858c | 653b5f9edca3131ec13d9148104fa899c35795ed | (lets just revert; it'll be easier to get it merged that way) | afiaka87 | 25 |
lucidrains/DALLE-pytorch | 280 | Added support for webdataset | null | null | 2021-06-01 20:58:02+00:00 | 2021-06-16 22:13:04+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
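# The flops_profiler section is driven by --flops_profiler: DeepSpeed profiles step 200 and the
# training loop below raises StopIteration at iteration 201, so the run ends right after the
# profile is printed. A rough invocation sketch (backend/launcher flags beyond those defined in
# this parser are assumptions):
#   deepspeed train_dalle.py --image_text_folder ./data --flops_profiler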
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# libraries needed for webdataset support
import webdataset as wds
from torchvision import transforms as T
from PIL import Image
from io import BytesIO
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument(
'--wds',
type = str,
default='',
help = 'Comma separated list of WebDataset (1) image and (2) text column names. Must contain 2 values, e.g. img,cap.'
)
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 128, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 4, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 16, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
WEBDATASET_IMAGE_TEXT_COLUMNS = tuple(args.wds.split(','))
ENABLE_WEBDATASET = True if len(WEBDATASET_IMAGE_TEXT_COLUMNS) == 2 else False
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
if not ENABLE_WEBDATASET:
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
else:
# quit early if no tar files were found
if Path(args.image_text_folder).is_dir():
DATASET = [str(p) for p in Path(args.image_text_folder).glob("**/*") if ".tar" in str(p).lower()] # .name
assert len(DATASET) > 0, 'The directory ({}) does not contain any WebDataset/.tar files.'.format(args.image_text_folder)
print('Found {} WebDataset .tar(.gz) file(s) under given path {}!'.format(len(DATASET), args.image_text_folder))
    elif ('http://' in args.image_text_folder.lower()) or ('https://' in args.image_text_folder.lower()):
        DATASET = f"pipe:curl -L -s {args.image_text_folder} || true"
        print('Found http(s) link under given path {}!'.format(args.image_text_folder))
elif 'gs://' in args.image_text_folder.lower():
DATASET = f"pipe:gsutil cat {args.image_text_folder} || true"
        print('Found GCS link under given path {}!'.format(args.image_text_folder))
elif '.tar' in args.image_text_folder:
DATASET = args.image_text_folder
print('Found WebDataset .tar(.gz) file under given path {}!'.format(args.image_text_folder))
else:
raise Exception('No folder, no .tar(.gz) and no url pointing to tar files provided under {}.'.format(args.image_text_folder))
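# Rough usage sketch for the branch above (paths and shard names are hypothetical):
#   python train_dalle.py --image_text_folder ./shards --wds img,cap --taming
#   python train_dalle.py --image_text_folder https://example.org/data-000.tar --wds img,cap
# i.e. a directory of .tar shards, a single .tar file, or an http(s)/gs:// location that
# curl/gsutil can stream; 'img,cap' names the WebDataset image and text columns (see --wds help).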
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
imagepreproc = T.Compose([
T.Lambda(lambda img: img.convert('RGB')
if img.mode != 'RGB' else img),
T.RandomResizedCrop(IMAGE_SIZE,
scale=(args.resize_ratio, 1.),
ratio=(1., 1.)),
T.ToTensor(),
])
def imagetransform(b):
return Image.open(BytesIO(b))
def tokenize(s):
return tokenizer.tokenize(
s.decode('utf-8'),
TEXT_SEQ_LEN,
truncate_text=args.truncate_captions).squeeze(0)
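# Rough per-sample behaviour of the two helpers above (payload bytes are hypothetical and the
# exact dtype depends on the active tokenizer): imagetransform/tokenize decode the raw tar
# payloads, after which imagepreproc is applied by the second map_dict below.
#   tokens = tokenize(b'a photo of a dog')   # ~ 1-D tensor of length TEXT_SEQ_LEN
#   pil    = imagetransform(raw_jpeg_bytes)  # -> PIL.Image, later turned into a tensor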
if ENABLE_WEBDATASET:
DATASET_SIZE = int(1e9) # You need to set a nominal length for the Dataset in order to avoid warnings from DataLoader
myimg, mycap = WEBDATASET_IMAGE_TEXT_COLUMNS
image_text_mapping = {
myimg: imagetransform,
mycap: tokenize
}
image_mapping = {
myimg: imagepreproc
}
num_batches = DATASET_SIZE // BATCH_SIZE
ds = (
wds.WebDataset(DATASET, length=num_batches)
# .shuffle(is_shuffle) # Commented out for WebDataset as the behaviour cannot be predicted yet
.map_dict(**image_text_mapping)
.map_dict(**image_mapping)
.to_tuple(mycap, myimg)
.batched(BATCH_SIZE, partial=False) # It is good to avoid partial batches when using Distributed training
)
else:
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
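# Note on the WebDataset branch above (file names illustrative): tar members are grouped by
# basename and the --wds column names act as extensions, so a shard containing 00001.img plus
# 00001.cap yields one sample; .to_tuple(mycap, myimg) then emits (caption, image) pairs,
# matching the (text, images) unpacking in the training loop further down.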
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
if not ENABLE_WEBDATASET:
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
if ENABLE_WEBDATASET:
# WebLoader for WebDataset and DeepSpeed compatibility
dl = wds.WebLoader(ds, batch_size=None, shuffle=False) # optionally add num_workers=2 (n) argument
number_of_batches = DATASET_SIZE // (BATCH_SIZE * distr_backend.get_world_size())
dl = dl.repeat(2).slice(number_of_batches)
dl.length = number_of_batches
else:
# Regular DataLoader for image-text-folder datasets
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
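# WebLoader is built with batch_size=None because the pipeline above already batches via
# .batched(BATCH_SIZE); repeat(2).slice(number_of_batches) then pins each epoch to a fixed
# number of global batches, e.g. with BATCH_SIZE=4 on 8 workers (illustrative numbers):
#   number_of_batches = int(1e9) // (4 * 8)  # = 31_250_000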
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate((dl if ENABLE_WEBDATASET else distr_dl)):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| robvanvolt | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | 2eceb841b4a5795a56165a941b69a30da4cad3e6 | awesome imports great job | chesse20 | 26 |
lucidrains/DALLE-pytorch | 280 | Added support for webdataset | null | null | 2021-06-01 20:58:02+00:00 | 2021-06-16 22:13:04+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
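# note: image sampling is skipped when DeepSpeed runs in fp16, since generate_images hits CUDA index errors in that setup (see the guard below)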
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# libraries needed for webdataset support
import webdataset as wds
from torchvision import transforms as T
from PIL import Image
from io import BytesIO
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument(
'--wds',
type = str,
default='',
help = 'Comma separated list of WebDataset (1) image and (2) text column names. Must contain 2 values, e.g. img,cap.'
)
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 128, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 4, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 16, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
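# WebDataset mode is active only when --wds names exactly two comma-separated columns (image, caption)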
WEBDATASET_IMAGE_TEXT_COLUMNS = tuple(args.wds.split(','))
ENABLE_WEBDATASET = len(WEBDATASET_IMAGE_TEXT_COLUMNS) == 2
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
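# when WebDataset is disabled, expect a plain image-text folder; otherwise resolve .tar shards from a directory, an http(s)/gs url, or a single .tar path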
if not ENABLE_WEBDATASET:
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
else:
# quit early if no tar files were found
if Path(args.image_text_folder).is_dir():
DATASET = [str(p) for p in Path(args.image_text_folder).glob("**/*") if ".tar" in str(p).lower()] # .name
assert len(DATASET) > 0, 'The directory ({}) does not contain any WebDataset/.tar files.'.format(args.image_text_folder)
print('Found {} WebDataset .tar(.gz) file(s) under given path {}!'.format(len(DATASET), args.image_text_folder))
elif ('http://' in args.image_text_folder.lower()) | ('https://' in args.image_text_folder.lower()):
DATASET = f"pipe:curl -L -s {args.image_text_folder} || true"
print('Found http(s) link under given path {}!'.format(args.image_text_folder))
elif 'gs://' in args.image_text_folder.lower():
DATASET = f"pipe:gsutil cat {args.image_text_folder} || true"
print('Found GCS link under given path {}!'.format(args.image_text_folder))
elif '.tar' in args.image_text_folder:
DATASET = args.image_text_folder
print('Found WebDataset .tar(.gz) file under given path {}!'.format(args.image_text_folder))
else:
raise Exception('No folder, no .tar(.gz) and no url pointing to tar files provided under {}.'.format(args.image_text_folder))
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
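# preprocessing applied to WebDataset images: force RGB and random-resized-crop to the VAE input size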
imagepreproc = T.Compose([
T.Lambda(lambda img: img.convert('RGB')
if img.mode != 'RGB' else img),
T.RandomResizedCrop(IMAGE_SIZE,
scale=(args.resize_ratio, 1.),
ratio=(1., 1.)),
T.ToTensor(),
])
def imagetransform(b):
return Image.open(BytesIO(b))
def tokenize(s):
return tokenizer.tokenize(
s.decode('utf-8'),
TEXT_SEQ_LEN,
truncate_text=args.truncate_captions).squeeze(0)
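# assemble the WebDataset pipeline: decode raw image bytes and tokenize captions, apply the image transforms, then batch inside the dataset itself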
if ENABLE_WEBDATASET:
DATASET_SIZE = int(1e9) # You need to set a nominal length for the Dataset in order to avoid warnings from DataLoader
myimg, mycap = WEBDATASET_IMAGE_TEXT_COLUMNS
image_text_mapping = {
myimg: imagetransform,
mycap: tokenize
}
image_mapping = {
myimg: imagepreproc
}
num_batches = DATASET_SIZE // BATCH_SIZE
ds = (
wds.WebDataset(DATASET, length=num_batches)
# .shuffle(is_shuffle) # Commented out for WebDataset as the behaviour cannot be predicted yet
.map_dict(**image_text_mapping)
.map_dict(**image_mapping)
.to_tuple(mycap, myimg)
.batched(BATCH_SIZE, partial=False) # It is good to avoid partial batches when using Distributed training
)
else:
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
if not ENABLE_WEBDATASET:
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
if ENABLE_WEBDATASET:
# WebLoader for WebDataset and DeepSpeed compatibility
dl = wds.WebLoader(ds, batch_size=None, shuffle=False) # optionally add num_workers=2 (n) argument
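# repeat the stream and slice it to number_of_batches so every worker iterates the same, fixed number of batches per epoch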
number_of_batches = DATASET_SIZE // (BATCH_SIZE * distr_backend.get_world_size())
dl = dl.repeat(2).slice(number_of_batches)
dl.length = number_of_batches
else:
# Regular DataLoader for image-text-folder datasets
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
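# note: image sampling is skipped when DeepSpeed runs in fp16, since generate_images hits CUDA index errors in that setup (see the guard below)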
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate((dl if ENABLE_WEBDATASET else distr_dl)):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| robvanvolt | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | 2eceb841b4a5795a56165a941b69a30da4cad3e6 | Adjusted the imports so they have only the necessary functions imported, thank you for your feedback!:) | robvanvolt | 27 |
lucidrains/DALLE-pytorch | 280 | Added support for webdataset | null | null | 2021-06-01 20:58:02+00:00 | 2021-06-16 22:13:04+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
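# note: image sampling is skipped when DeepSpeed runs in fp16, since generate_images hits CUDA index errors in that setup (see the guard below)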
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# libraries needed for webdataset support
import webdataset as wds
from torchvision import transforms as T
from PIL import Image
from io import BytesIO
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument(
'--wds',
type = str,
default='',
help = 'Comma separated list of WebDataset (1) image and (2) text column names. Must contain 2 values, e.g. img,cap.'
)
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 128, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 4, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 16, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
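# WebDataset mode is active only when --wds names exactly two comma-separated columns (image, caption)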
WEBDATASET_IMAGE_TEXT_COLUMNS = tuple(args.wds.split(','))
ENABLE_WEBDATASET = len(WEBDATASET_IMAGE_TEXT_COLUMNS) == 2
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
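# when WebDataset is disabled, expect a plain image-text folder; otherwise resolve .tar shards from a directory, an http(s)/gs url, or a single .tar path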
if not ENABLE_WEBDATASET:
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
else:
# quit early if no tar files were found
if Path(args.image_text_folder).is_dir():
DATASET = [str(p) for p in Path(args.image_text_folder).glob("**/*") if ".tar" in str(p).lower()] # .name
assert len(DATASET) > 0, 'The directory ({}) does not contain any WebDataset/.tar files.'.format(args.image_text_folder)
print('Found {} WebDataset .tar(.gz) file(s) under given path {}!'.format(len(DATASET), args.image_text_folder))
elif ('http://' in args.image_text_folder.lower()) | ('https://' in args.image_text_folder.lower()):
DATASET = f"pipe:curl -L -s {args.image_text_folder} || true"
print('Found http(s) link under given path {}!'.format(args.image_text_folder))
elif 'gs://' in args.image_text_folder.lower():
DATASET = f"pipe:gsutil cat {args.image_text_folder} || true"
print('Found GCS link under given path {}!'.format(args.image_text_folder))
elif '.tar' in args.image_text_folder:
DATASET = args.image_text_folder
print('Found WebDataset .tar(.gz) file under given path {}!'.format(args.image_text_folder))
else:
raise Exception('No folder, no .tar(.gz) and no url pointing to tar files provided under {}.'.format(args.image_text_folder))
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
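# preprocessing applied to WebDataset images: force RGB and random-resized-crop to the VAE input size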
imagepreproc = T.Compose([
T.Lambda(lambda img: img.convert('RGB')
if img.mode != 'RGB' else img),
T.RandomResizedCrop(IMAGE_SIZE,
scale=(args.resize_ratio, 1.),
ratio=(1., 1.)),
T.ToTensor(),
])
def imagetransform(b):
return Image.open(BytesIO(b))
def tokenize(s):
return tokenizer.tokenize(
s.decode('utf-8'),
TEXT_SEQ_LEN,
truncate_text=args.truncate_captions).squeeze(0)
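# assemble the WebDataset pipeline: decode raw image bytes and tokenize captions, apply the image transforms, then batch inside the dataset itself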
if ENABLE_WEBDATASET:
DATASET_SIZE = int(1e9) # You need to set a nominal length for the Dataset in order to avoid warnings from DataLoader
myimg, mycap = WEBDATASET_IMAGE_TEXT_COLUMNS
image_text_mapping = {
myimg: imagetransform,
mycap: tokenize
}
image_mapping = {
myimg: imagepreproc
}
num_batches = DATASET_SIZE // BATCH_SIZE
ds = (
wds.WebDataset(DATASET, length=num_batches)
# .shuffle(is_shuffle) # Commented out for WebDataset as the behaviour cannot be predicted yet
.map_dict(**image_text_mapping)
.map_dict(**image_mapping)
.to_tuple(mycap, myimg)
.batched(BATCH_SIZE, partial=False) # It is good to avoid partial batches when using Distributed training
)
else:
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
if not ENABLE_WEBDATASET:
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
if ENABLE_WEBDATASET:
# WebLoader for WebDataset and DeepSpeed compatibility
dl = wds.WebLoader(ds, batch_size=None, shuffle=False) # optionally add num_workers=2 (n) argument
number_of_batches = DATASET_SIZE // (BATCH_SIZE * distr_backend.get_world_size())
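    # repeat() + slice() give every worker the same fixed number of batches per
    # epoch, since the streaming dataset has no inherent length to split evenly.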
dl = dl.repeat(2).slice(number_of_batches)
dl.length = number_of_batches
else:
# Regular DataLoader for image-text-folder datasets
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
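# When running DeepSpeed with fp16, calls to dalle.generate_images() are skipped
# because sampling triggers CUDA index errors (see the guard in the training loop).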
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
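    # Saves either a DeepSpeed checkpoint directory (plus a small auxiliary .pt
    # with the hyperparameters) or, without DeepSpeed, a single .pt file holding
    # weights, optimizer state and scheduler state.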
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate((dl if ENABLE_WEBDATASET else distr_dl)):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| robvanvolt | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | 2eceb841b4a5795a56165a941b69a30da4cad3e6 | Looks like you changed the defaults @robvanvolt
Did you mean to?
Otherwise let's revert this part | rom1504 | 28 |
lucidrains/DALLE-pytorch | 280 | Added support for webdataset | null | null | 2021-06-01 20:58:02+00:00 | 2021-06-16 22:13:04+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
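# drop_last avoids a ragged final batch; the DistributedSampler is only set when
# the Horovod backend disables in-loader shuffling.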
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# libraries needed for webdataset support
import webdataset as wds
from torchvision import transforms as T
from PIL import Image
from io import BytesIO
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument(
'--wds',
type = str,
default='',
help = 'Comma separated list of WebDataset (1) image and (2) text column names. Must contain 2 values, e.g. img,cap.'
)
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 128, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 4, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 16, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
WEBDATASET_IMAGE_TEXT_COLUMNS = tuple(args.wds.split(','))
ENABLE_WEBDATASET = True if len(WEBDATASET_IMAGE_TEXT_COLUMNS) == 2 else False
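# WebDataset mode is enabled only when --wds names exactly two columns, e.g. --wds img,cap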
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
if not ENABLE_WEBDATASET:
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
else:
# quit early if no tar files were found
if Path(args.image_text_folder).is_dir():
DATASET = [str(p) for p in Path(args.image_text_folder).glob("**/*") if ".tar" in str(p).lower()] # .name
assert len(DATASET) > 0, 'The directory ({}) does not contain any WebDataset/.tar files.'.format(args.image_text_folder)
print('Found {} WebDataset .tar(.gz) file(s) under given path {}!'.format(len(DATASET), args.image_text_folder))
    elif ('http://' in args.image_text_folder.lower()) or ('https://' in args.image_text_folder.lower()):
DATASET = f"pipe:curl -L -s {args.image_text_folder} || true"
        print('Found http(s) link under given path {}!'.format(args.image_text_folder))
elif 'gs://' in args.image_text_folder.lower():
DATASET = f"pipe:gsutil cat {args.image_text_folder} || true"
        print('Found GCS link under given path {}!'.format(args.image_text_folder))
elif '.tar' in args.image_text_folder:
DATASET = args.image_text_folder
print('Found WebDataset .tar(.gz) file under given path {}!'.format(args.image_text_folder))
else:
raise Exception('No folder, no .tar(.gz) and no url pointing to tar files provided under {}.'.format(args.image_text_folder))
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
imagepreproc = T.Compose([
T.Lambda(lambda img: img.convert('RGB')
if img.mode != 'RGB' else img),
T.RandomResizedCrop(IMAGE_SIZE,
scale=(args.resize_ratio, 1.),
ratio=(1., 1.)),
T.ToTensor(),
])
def imagetransform(b):
return Image.open(BytesIO(b))
def tokenize(s):
return tokenizer.tokenize(
s.decode('utf-8'),
TEXT_SEQ_LEN,
truncate_text=args.truncate_captions).squeeze(0)
if ENABLE_WEBDATASET:
DATASET_SIZE = int(1e9) # You need to set a nominal length for the Dataset in order to avoid warnings from DataLoader
myimg, mycap = WEBDATASET_IMAGE_TEXT_COLUMNS
image_text_mapping = {
myimg: imagetransform,
mycap: tokenize
}
image_mapping = {
myimg: imagepreproc
}
num_batches = DATASET_SIZE // BATCH_SIZE
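    # Build the WebDataset pipeline: decode the raw bytes into PIL images,
    # tokenize the captions, apply the torchvision preprocessing, and emit
    # (caption, image) batches of size BATCH_SIZE.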
ds = (
wds.WebDataset(DATASET, length=num_batches)
# .shuffle(is_shuffle) # Commented out for WebDataset as the behaviour cannot be predicted yet
.map_dict(**image_text_mapping)
.map_dict(**image_mapping)
.to_tuple(mycap, myimg)
.batched(BATCH_SIZE, partial=False) # It is good to avoid partial batches when using Distributed training
)
else:
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
if not ENABLE_WEBDATASET:
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
if ENABLE_WEBDATASET:
# WebLoader for WebDataset and DeepSpeed compatibility
dl = wds.WebLoader(ds, batch_size=None, shuffle=False) # optionally add num_workers=2 (n) argument
number_of_batches = DATASET_SIZE // (BATCH_SIZE * distr_backend.get_world_size())
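    # repeat() + slice() give every worker the same fixed number of batches per
    # epoch, since the streaming dataset has no inherent length to split evenly.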
dl = dl.repeat(2).slice(number_of_batches)
dl.length = number_of_batches
else:
# Regular DataLoader for image-text-folder datasets
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
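    # Saves either a DeepSpeed checkpoint directory (plus a small auxiliary .pt
    # with the hyperparameters) or, without DeepSpeed, a single .pt file holding
    # weights, optimizer state and scheduler state.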
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
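    # In WebDataset mode the raw WebLoader is iterated directly; the
    # backend-distributed loader (distr_dl) is only used for folder datasets.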
for i, (text, images) in enumerate((dl if ENABLE_WEBDATASET else distr_dl)):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| robvanvolt | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | 2eceb841b4a5795a56165a941b69a30da4cad3e6 | #311 | rom1504 | 29 |
lucidrains/DALLE-pytorch | 280 | Added support for webdataset | null | null | 2021-06-01 20:58:02+00:00 | 2021-06-16 22:13:04+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
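# When running DeepSpeed with fp16, calls to dalle.generate_images() are skipped
# because sampling triggers CUDA index errors (see the guard in the training loop).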
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
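        # The DeepSpeed flops profiler is configured above to profile step 200, so training is halted right after that step once the report has been produced.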
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
from glob import glob
import os
import shutil
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# libraries needed for webdataset support
import webdataset as wds
from torchvision import transforms as T
from PIL import Image
from io import BytesIO
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--vqgan_model_path', type=str, default = None,
help='path to your trained VQGAN weights. This should be a .ckpt file. (only valid when taming option is enabled)')
parser.add_argument('--vqgan_config_path', type=str, default = None,
help='path to your trained VQGAN config. This should be a .yaml file. (only valid when taming option is enabled)')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument(
'--wds',
type = str,
default='',
help = 'Comma separated list of WebDataset (1) image and (2) text column names. Must contain 2 values, e.g. img,cap.'
)
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--amp', action='store_true',
help='Apex "O1" automatic mixed precision. More stable than 16 bit precision. Can\'t be used in conjunction with deepspeed zero stages 1-3.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--flops_profiler', dest = 'flops_profiler', action='store_true', help = 'Exits after printing detailed flops/runtime analysis of forward/backward')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--keep_n_checkpoints', default = None, type = int, help = '(Careful) Deletes old deepspeed checkpoints if there are more than n')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--ga_steps', default = 1, type = int, help = 'Number of steps to accumulate gradients across per each iteration. DeepSpeed only.')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 128, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 4, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 16, type = int, help = 'Model head dimension')
train_group.add_argument('--ff_dropout', default = 0.0, type = float, help = 'Feed forward dropout.')
train_group.add_argument('--attn_dropout', default = 0.0, type = float, help = 'Attention dropout.')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
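# e.g. cp_path_to_dir('dalle.pt', 'ds') -> Path('dalle-ds-cp')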
# constants
WEBDATASET_IMAGE_TEXT_COLUMNS = tuple(args.wds.split(','))
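# WebDataset mode is only enabled when --wds supplies exactly two comma-separated column names (image, caption).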
ENABLE_WEBDATASET = len(WEBDATASET_IMAGE_TEXT_COLUMNS) == 2
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
VQGAN_MODEL_PATH = args.vqgan_model_path
VQGAN_CONFIG_PATH = args.vqgan_config_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
KEEP_N_CHECKPOINTS = args.keep_n_checkpoints
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
FF_DROPOUT = args.ff_dropout
ATTN_DROPOUT = args.attn_dropout
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
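# Small sidecar file written into the DeepSpeed checkpoint directory so hparams, vae_params and the epoch can be read back with the standard torch.load path on resume.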
if not ENABLE_WEBDATASET:
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
else:
# quit early if no tar files were found
if Path(args.image_text_folder).is_dir():
DATASET = [str(p) for p in Path(args.image_text_folder).glob("**/*") if ".tar" in str(p).lower()] # .name
assert len(DATASET) > 0, 'The directory ({}) does not contain any WebDataset/.tar files.'.format(args.image_text_folder)
print('Found {} WebDataset .tar(.gz) file(s) under given path {}!'.format(len(DATASET), args.image_text_folder))
    elif ('http://' in args.image_text_folder.lower()) or ('https://' in args.image_text_folder.lower()):
        DATASET = f"pipe:curl -L -s {args.image_text_folder} || true"
        print('Found http(s) link under given path {}!'.format(args.image_text_folder))
    elif 'gs://' in args.image_text_folder.lower():
        DATASET = f"pipe:gsutil cat {args.image_text_folder} || true"
        print('Found GCS link under given path {}!'.format(args.image_text_folder))
elif '.tar' in args.image_text_folder:
DATASET = args.image_text_folder
print('Found WebDataset .tar(.gz) file under given path {}!'.format(args.image_text_folder))
else:
raise Exception('No folder, no .tar(.gz) and no url pointing to tar files provided under {}.'.format(args.image_text_folder))
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
opt_state = loaded_obj.get('opt_state')
scheduler_state = loaded_obj.get('scheduler_state')
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
resume_epoch = loaded_obj.get('epoch', 0)
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
if args.taming:
vae = VQGanVAE(VQGAN_MODEL_PATH, VQGAN_CONFIG_PATH)
else:
vae = OpenAIDiscreteVAE()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
ff_dropout=FF_DROPOUT,
attn_dropout=ATTN_DROPOUT,
)
resume_epoch = 0
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
imagepreproc = T.Compose([
T.Lambda(lambda img: img.convert('RGB')
if img.mode != 'RGB' else img),
T.RandomResizedCrop(IMAGE_SIZE,
scale=(args.resize_ratio, 1.),
ratio=(1., 1.)),
T.ToTensor(),
])
def imagetransform(b):
return Image.open(BytesIO(b))
def tokenize(s):
return tokenizer.tokenize(
s.decode('utf-8'),
TEXT_SEQ_LEN,
truncate_text=args.truncate_captions).squeeze(0)
if ENABLE_WEBDATASET:
DATASET_SIZE = int(1e9) # You need to set a nominal length for the Dataset in order to avoid warnings from DataLoader
myimg, mycap = WEBDATASET_IMAGE_TEXT_COLUMNS
image_text_mapping = {
myimg: imagetransform,
mycap: tokenize
}
image_mapping = {
myimg: imagepreproc
}
num_batches = DATASET_SIZE // BATCH_SIZE
ds = (
wds.WebDataset(DATASET, length=num_batches)
# .shuffle(is_shuffle) # Commented out for WebDataset as the behaviour cannot be predicted yet
.map_dict(**image_text_mapping)
.map_dict(**image_mapping)
.to_tuple(mycap, myimg)
.batched(BATCH_SIZE, partial=False) # It is good to avoid partial batches when using Distributed training
)
else:
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
if not ENABLE_WEBDATASET:
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
if ENABLE_WEBDATASET:
# WebLoader for WebDataset and DeepSpeed compatibility
dl = wds.WebLoader(ds, batch_size=None, shuffle=False) # optionally add num_workers=2 (n) argument
number_of_batches = DATASET_SIZE // (BATCH_SIZE * distr_backend.get_world_size())
dl = dl.repeat(2).slice(number_of_batches)
dl.length = number_of_batches
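    # repeat()/slice() cap the streaming WebLoader at a fixed number of batches per epoch, and .length records that nominal epoch length.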
else:
# Regular DataLoader for image-text-folder datasets
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if RESUME and opt_state:
opt.load_state_dict(opt_state)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if RESUME and scheduler_state:
scheduler.load_state_dict(scheduler_state)
else:
scheduler = None
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=False,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_accumulation_steps': args.ga_steps,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
'amp': {
'enabled': args.amp,
'opt_level': 'O1',
},
"flops_profiler": {
"enabled": args.flops_profiler,
"profile_step": 200,
"module_depth": -1,
"top_modules": 1,
"detailed": True,
"output_file": None # TODO Can't get this to work.
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
# Do not pass the LR scheduler to DeepSpeed so we can manually
# advance it.
lr_scheduler=scheduler if LR_DECAY and not using_deepspeed else None,
config_params=deepspeed_config,
)
# Prefer scheduler in `deepspeed_config`.
if LR_DECAY and distr_scheduler is None:
distr_scheduler = scheduler
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path, epoch=0):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'epoch': epoch,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
if KEEP_N_CHECKPOINTS is not None and distr_backend.is_root_worker():
checkpoints = sorted(glob(str(cp_dir / "global*")), key=os.path.getmtime, reverse=True)
for checkpoint in checkpoints[KEEP_N_CHECKPOINTS:]:
shutil.rmtree(checkpoint)
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict(),
'opt_state': opt.state_dict(),
}
save_obj['scheduler_state'] = (scheduler.state_dict() if scheduler else None)
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME, epoch=resume_epoch)
for epoch in range(resume_epoch, EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
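    # In WebDataset mode the WebLoader built above is iterated directly; otherwise the loader returned by distr_backend.distribute() is used.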
for i, (text, images) in enumerate((dl if ENABLE_WEBDATASET else distr_dl)):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if i == 201 and args.flops_profiler:
raise StopIteration("Profiler has finished running. Stopping training early.")
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME, epoch=epoch)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| robvanvolt | d6107ccc24f2fbbc72e1fabbd97f01e6b143606d | 2eceb841b4a5795a56165a941b69a30da4cad3e6 | I did it for training on my older computers, as I always ran out of RAM, so it was much easier for me with the default being 128 - I don't mind, we can revert it again!:) | robvanvolt | 30 |
lucidrains/DALLE-pytorch | 256 | Deepspeed fix : save the normal model too | useful to be able to get the model for generation even when using deepspeed | null | 2021-05-26 09:26:12+00:00 | 2021-06-05 20:24:24+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4 | 80996978cbb7390f981a0832d972e4d8f5bae945 | Can we get a `time.sleep(2)` after this line? In particular, when using DeepSpeed there is a lot of scrollback and this message gets lost almost immediately. | afiaka87 | 31 |
lucidrains/DALLE-pytorch | 256 | Deepspeed fix : save the normal model too | useful to be able to get the model for generation even when using deepspeed | null | 2021-05-26 09:26:12+00:00 | 2021-06-05 20:24:24+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4 | 80996978cbb7390f981a0832d972e4d8f5bae945 | One nice thing about Python's tracebacks is that you get to see the comment if an error occurs directly on a given line. I've made a wiki page you can link to instead here - which will be clickable in many terminals as well.
https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints | afiaka87 | 32 |
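A rough illustration of the reviewer's point above (the function and dictionary below are made up for the example, not part of train_dalle.py): Python tracebacks echo the offending source line verbatim, so a trailing comment carrying a docs URL on that line is visible to whoever hits the error.

```python
# Minimal sketch; assumes it is saved to a file, since tracebacks only echo
# source lines they can read back from disk.
import traceback

def load_weights(checkpoint):
    return checkpoint['weights']  # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints

try:
    load_weights({})  # missing 'weights' key raises KeyError
except KeyError:
    traceback.print_exc()  # the printed frame shows the line above, comment and URL included
```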
lucidrains/DALLE-pytorch | 256 | Deepspeed fix : save the normal model too | useful to be able to get the model for generation even when using deepspeed | null | 2021-05-26 09:26:12+00:00 | 2021-06-05 20:24:24+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4 | 80996978cbb7390f981a0832d972e4d8f5bae945 | Nitpicking at this point -
How about this instead?
```python
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
``` | afiaka87 | 33 |
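A small sketch of the fail-early behaviour the suggested comment describes (the helper names here are illustrative, not the script's actual API): writing one checkpoint before the loop starts means an unwritable path or a broken checkpoint configuration raises immediately, instead of after the first save interval.

```python
# Minimal sketch, assuming a generic torch model whose forward pass returns a loss.
import torch

def save_checkpoint(model, path):
    torch.save({'weights': model.state_dict()}, path)

def train(model, loader, optimizer, path, epochs=1):
    save_checkpoint(model, path)  # fail early when mis-configured
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            loss = model(batch)
            loss.backward()
            optimizer.step()
        save_checkpoint(model, path)  # periodic saves, as in train_dalle.py
```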
lucidrains/DALLE-pytorch | 256 | Deepspeed fix : save the normal model too | useful to be able to get the model for generation even when using deepspeed | null | 2021-05-26 09:26:12+00:00 | 2021-06-05 20:24:24+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
                    help='output file name (without extension; ".pt" is appended)')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
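# With the Horovod backend the DataLoader does not shuffle; each worker instead reads its own shard of the dataset through a DistributedSampler.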
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
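# dalle.generate_images under the fp16 DeepSpeed engine hits CUDA index errors (see the guard in the image-logging code below), so wandb image sampling is skipped in that case.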
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
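        # For ZeRO stage 2/3 only the DeepSpeed checkpoint folder is usable, so stop here; otherwise fall through and also write the standard single-file checkpoint below.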
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
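# A minimal, hypothetical sketch (not executed by this script) of how the standard
# checkpoint written by save_model could be reloaded for generation later on; it
# mirrors the resume/loading code earlier in this file and assumes a tokenized
# `text` batch is available:
#
#   loaded_obj = torch.load(DALLE_OUTPUT_FILE_NAME, map_location='cpu')
#   dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
#   vae = DiscreteVAE(**vae_params) if vae_params is not None else OpenAIDiscreteVAE()
#   dalle = DALLE(vae=vae, **dalle_params)
#   dalle.load_state_dict(weights)
#   images = dalle.generate_images(text[:1], filter_thres=0.9)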
# training
# Save a checkpoint before training begins so that a mis-configured run fails early.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
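        # A timing window starts every 10 iterations on the root worker; samples/sec is reported at the end of the window (i % 10 == 9) below.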
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4 | 80996978cbb7390f981a0832d972e4d8f5bae945 | agreed, done | rom1504 | 34 |
lucidrains/DALLE-pytorch | 256 | Deepspeed fix : save the normal model too | useful to be able to get the model for generation even when using deepspeed | null | 2021-05-26 09:26:12+00:00 | 2021-06-05 20:24:24+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
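# dalle.generate_images under the fp16 DeepSpeed engine hits CUDA index errors (see the guard in the image-logging code below), so wandb image sampling is skipped in that case.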
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
                    help='output file name (without extension; ".pt" is appended)')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
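# With the Horovod backend the DataLoader does not shuffle; each worker instead reads its own shard of the dataset through a DistributedSampler.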
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
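# dalle.generate_images under the fp16 DeepSpeed engine hits CUDA index errors (see the guard in the image-logging code below), so wandb image sampling is skipped in that case.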
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
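        # For ZeRO stage 2/3 only the DeepSpeed checkpoint folder is usable, so stop here; otherwise fall through and also write the standard single-file checkpoint below.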
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Save a checkpoint before training begins so that a mis-configured run fails early.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
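        # A timing window starts every 10 iterations on the root worker; samples/sec is reported at the end of the window (i % 10 == 9) below.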
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4 | 80996978cbb7390f981a0832d972e4d8f5bae945 | do you mean putting that in the comment ? if so, done | rom1504 | 35 |
lucidrains/DALLE-pytorch | 256 | Deepspeed fix : save the normal model too | useful to be able to get the model for generation even when using deepspeed | null | 2021-05-26 09:26:12+00:00 | 2021-06-05 20:24:24+00:00 | train_dalle.py | import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
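# dalle.generate_images under the fp16 DeepSpeed engine hits CUDA index errors (see the guard in the image-logging code below), so wandb image sampling is skipped in that case.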
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import time
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle",
                    help='output file name (without extension; ".pt" is appended)')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--save_every_n_steps', default = 1000, type = int, help = 'Save a checkpoint every n steps')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name + ".pt"
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
SAVE_EVERY_N_STEPS = args.save_every_n_steps
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2:
print(f"Checkpoints made with DeepSpeed ZeRO Stages 2 and 3 will be stored in deepspeed checkpoint folder")
print(f"As such, they will require DeepSpeed as a dependency in order to resume from or generate with.")
print("See the deespeed conversion script for details on how to convert your ZeRO stage 2/3 checkpoint to a single file.")
print("If using a single GPU, consider running with apex automatic mixed precision instead for a similar speedup to ZeRO.")
time.sleep(2)
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
if deepspeed_config.get('zero_optimization', {}).get('stage', 0) >= 2: # see https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
# Saves a checkpoint before training begins to fail early when mis-configured.
# See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
save_model(DALLE_OUTPUT_FILE_NAME)
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if i % 10 == 0 and distr_backend.is_root_worker():
t = time.time()
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % SAVE_EVERY_N_STEPS == 0:
save_model(DALLE_OUTPUT_FILE_NAME)
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if i % 10 == 9 and distr_backend.is_root_worker():
sample_per_sec = BATCH_SIZE * 10 / (time.time() - t)
log["sample_per_sec"] = sample_per_sec
print(epoch, i, f'sample_per_sec - {sample_per_sec}')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(avg_loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | 1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4 | 80996978cbb7390f981a0832d972e4d8f5bae945 | done | rom1504 | 36 |
lucidrains/DALLE-pytorch | 244 | save and report dalle.pt at the end of each epoch | also add a parameter to allow specifying a different name to avoid overwriting
if running 2 dalle on the same folder
this solves:
* avoid using a lot of disk with one model every 100 steps under wandb folder
* avoid confusion and errors while running 2 models in the same folder | null | 2021-05-11 21:38:16+00:00 | 2021-05-25 15:44:18+00:00 | train_dalle.py | import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
save_model(f'./dalle.pt')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
save_model(f'./dalle-final.pt')
if distr_backend.is_root_worker():
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | bdb04280c9ab55eb20f86b375dc1aad20fbd5315 | e7d3d2792c8a1747c2efcef03003ac538522b400 | Please move this out of the `if` block so `save_model` is called on every worker. | janEbert | 37 |
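The review comment above asks for the per-epoch `save_model(...)` call to sit outside the `if distr_backend.is_root_worker():` guard so that every rank reaches it, while the wandb artifact logging stays root-only. The sketch below illustrates that call-site shape only (it is not the exact merged diff) and reuses names defined in the training script above: `save_model`, `distr_backend`, `DALLE_OUTPUT_FILE_NAME`, `EPOCHS`, `model_config`, `run`, and `wandb`.

```python
# Illustrative sketch of the epoch-end checkpointing the comment describes;
# all names are taken from the training script above.
for epoch in range(EPOCHS):
    ...  # batch loop left unchanged

    # Reached by every worker: save_model itself decides which ranks write
    # (collective DeepSpeed save vs. root-only torch.save).
    save_model(DALLE_OUTPUT_FILE_NAME)

    if distr_backend.is_root_worker():
        # Only the root worker logs the checkpoint to wandb.
        model_artifact = wandb.Artifact('trained-dalle', type='model',
                                        metadata=dict(model_config))
        model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
        run.log_artifact(model_artifact)
```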
lucidrains/DALLE-pytorch | 244 | save and report dalle.pt at the end of each epoch | also add a parameter to allow specifying a different name to avoid overwriting
if running 2 dalle on the same folder
this solves:
* avoid using a lot of disk with one model every 100 steps under wandb folder
* avoid confusion and errors while running 2 models in the same folder | null | 2021-05-11 21:38:16+00:00 | 2021-05-25 15:44:18+00:00 | train_dalle.py | import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
save_model(f'./dalle.pt')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
save_model(f'./dalle-final.pt')
if distr_backend.is_root_worker():
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | bdb04280c9ab55eb20f86b375dc1aad20fbd5315 | e7d3d2792c8a1747c2efcef03003ac538522b400 | Why do we want to save the model in every worker?
For example, that wouldn't work in my case, where the file system is shared between workers. | rom1504 | 38 |
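A minimal sketch of the two checkpoint-saving patterns this question is about, assuming a plain PyTorch model on one side and a DeepSpeed engine (like `distr_dalle` above) on the other; `model`, `engine`, `is_root`, and `checkpoint_dir` are illustrative names, not part of the PR:

import torch

def save_plain(model, path, is_root):
    # Plain PyTorch: one worker (the root) can write the whole checkpoint,
    # so skipping the call on non-root workers is safe.
    if is_root:
        torch.save({'weights': model.state_dict()}, path)

def save_deepspeed(engine, checkpoint_dir):
    # DeepSpeed: save_checkpoint() is a collective call that every rank is expected
    # to make (each rank persists its own partition of the model/optimizer state),
    # so it should not be hidden behind a root-worker check.
    engine.save_checkpoint(checkpoint_dir)

This is why `save_model` above is invoked from every worker and does its root-worker filtering internally.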
lucidrains/DALLE-pytorch | 244 | save and report dalle.pt at the end of each epoch | also add a parameter that lets a different output file name be specified, to avoid overwriting
when running 2 DALL-E models in the same folder
this solves:
* avoids using a lot of disk space (previously one model was saved every 100 steps under the wandb folder)
* avoids confusion and errors when running 2 models in the same folder | null | 2021-05-11 21:38:16+00:00 | 2021-05-25 15:44:18+00:00 | train_dalle.py | import argparse
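# Illustrative usage of the flag added by this PR (the folder and file names below are
# assumptions; only --dalle_output_file_name itself comes from the change):
#   python train_dalle.py --image_text_folder ./captions --dalle_output_file_name dalle_run2.pt
# A second run started in the same working directory then writes dalle_run2.pt instead of
# overwriting the default dalle.pt.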
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
save_model(f'./dalle.pt')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
save_model(f'./dalle-final.pt')
if distr_backend.is_root_worker():
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | bdb04280c9ab55eb20f86b375dc1aad20fbd5315 | e7d3d2792c8a1747c2efcef03003ac538522b400 | Please take a look at the definition of `save_model`. The method has another guard inside. It's important to call this on all workers so DeepSpeed checkpoints are correctly handled. | janEbert | 39 |
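A condensed, annotated sketch of the guard structure this comment points at, reusing names defined in the script above (`using_deepspeed`, `distr_backend`, `distr_dalle`, `dalle`, `cp_path_to_dir`, `DEEPSPEED_CP_AUX_FILENAME`); it only illustrates which ranks do what and is not a drop-in replacement for `save_model`:

def save_model_outline(path, save_obj):
    if using_deepspeed:
        # every rank participates in the collective checkpoint write
        distr_dalle.save_checkpoint(cp_path_to_dir(path, 'ds'), client_state=save_obj)
        if not distr_backend.is_root_worker():
            return
        # only the root worker also writes the small auxiliary file next to the shards
        torch.save(save_obj, str(cp_path_to_dir(path, 'ds') / DEEPSPEED_CP_AUX_FILENAME))
        return
    if not distr_backend.is_root_worker():
        return  # plain PyTorch path: non-root workers do nothing
    torch.save({**save_obj, 'weights': dalle.state_dict()}, path)

Hence the training loop calls `save_model(...)` unconditionally on every worker and leaves the root-worker filtering to the function itself.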
lucidrains/DALLE-pytorch | 244 | save and report dalle.pt at the end of each epoch | also add a parameter that lets a different output file name be specified, to avoid overwriting
when running 2 DALL-E models in the same folder
this solves:
* avoids using a lot of disk space (previously one model was saved every 100 steps under the wandb folder)
* avoids confusion and errors when running 2 models in the same folder | null | 2021-05-11 21:38:16+00:00 | 2021-05-25 15:44:18+00:00 | train_dalle.py | import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
save_model(f'./dalle.pt')
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
save_model(f'./dalle-final.pt')
if distr_backend.is_root_worker():
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your BPE json file')
parser.add_argument('--dalle_output_file_name', type=str, default = "dalle.pt",
help='output_file_name')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
train_group = parser.add_argument_group('Training settings')
train_group.add_argument('--epochs', default = 20, type = int, help = 'Number of epochs')
train_group.add_argument('--batch_size', default = 4, type = int, help = 'Batch size')
train_group.add_argument('--learning_rate', default = 3e-4, type = float, help = 'Learning rate')
train_group.add_argument('--clip_grad_norm', default = 0.5, type = float, help = 'Clip gradient norm')
train_group.add_argument('--lr_decay', dest = 'lr_decay', action = 'store_true')
model_group = parser.add_argument_group('Model settings')
model_group.add_argument('--dim', default = 512, type = int, help = 'Model dimension')
model_group.add_argument('--text_seq_len', default = 256, type = int, help = 'Text sequence length')
model_group.add_argument('--depth', default = 2, type = int, help = 'Model depth')
model_group.add_argument('--heads', default = 8, type = int, help = 'Model number of heads')
model_group.add_argument('--dim_head', default = 64, type = int, help = 'Model head dimension')
model_group.add_argument('--reversible', dest = 'reversible', action='store_true')
model_group.add_argument('--loss_img_weight', default = 7, type = int, help = 'Image loss weight')
model_group.add_argument('--attn_types', default = 'full', type = str, help = 'comma separated list of attention types. attention type can be: full or sparse or axial_row or axial_col or conv_like.')
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
def get_trainable_params(model):
return [params for params in model.parameters() if params.requires_grad]
def cp_path_to_dir(cp_path, tag):
"""Convert a checkpoint path to a directory with `tag` inserted.
If `cp_path` is already a directory, return it unchanged.
"""
if not isinstance(cp_path, Path):
cp_path = Path(cp_path)
if cp_path.is_dir():
return cp_path
path_sans_extension = cp_path.parent / cp_path.stem
cp_dir = Path(f'{path_sans_extension}-{tag}-cp')
return cp_dir
# constants
DALLE_OUTPUT_FILE_NAME = args.dalle_output_file_name
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = args.epochs
BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
GRAD_CLIP_NORM = args.clip_grad_norm
LR_DECAY = args.lr_decay
MODEL_DIM = args.dim
TEXT_SEQ_LEN = args.text_seq_len
DEPTH = args.depth
HEADS = args.heads
DIM_HEAD = args.dim_head
REVERSIBLE = args.reversible
LOSS_IMG_WEIGHT = args.loss_img_weight
ATTN_TYPES = tuple(args.attn_types.split(','))
DEEPSPEED_CP_AUX_FILENAME = 'auxiliary.pt'
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
if using_deepspeed:
cp_dir = cp_path_to_dir(dalle_path, 'ds')
assert cp_dir.is_dir(), \
f'DeepSpeed checkpoint directory {cp_dir} not found'
dalle_path = cp_dir / DEEPSPEED_CP_AUX_FILENAME
else:
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
assert not vae_path.is_dir(), \
('Cannot load VAE model from directory; please use a '
'standard *.pt checkpoint. '
'Currently, merging a DeepSpeed-partitioned VAE into a DALLE '
'model is not supported.')
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT,
attn_types=ATTN_TYPES,
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME and not using_deepspeed:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(get_trainable_params(dalle), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=get_trainable_params(dalle),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
if RESUME and using_deepspeed:
distr_dalle.load_checkpoint(str(cp_dir))
def save_model(path):
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
}
if using_deepspeed:
cp_dir = cp_path_to_dir(path, 'ds')
distr_dalle.save_checkpoint(cp_dir, client_state=save_obj)
if not distr_backend.is_root_worker():
return
# Save auxiliary values so we can reuse the standard routine
# for loading.
save_obj = {
**save_obj,
# Save a nonsense value that directs the user to
# further help.
'weights': (
'To get a working standard checkpoint, '
'look into consolidating DeepSpeed checkpoints.'
),
}
torch.save(save_obj, str(cp_dir / DEEPSPEED_CP_AUX_FILENAME))
return
if not distr_backend.is_root_worker():
return
save_obj = {
**save_obj,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
# training
for epoch in range(EPOCHS):
if data_sampler:
data_sampler.set_epoch(epoch)
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
log = {}
if i % 10 == 0 and distr_backend.is_root_worker():
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
if distr_backend.is_root_worker():
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
if distr_backend.is_root_worker():
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
save_model(DALLE_OUTPUT_FILE_NAME)
if distr_backend.is_root_worker():
wandb.save(DALLE_OUTPUT_FILE_NAME)
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file(DALLE_OUTPUT_FILE_NAME)
run.log_artifact(model_artifact)
wandb.finish()
| rom1504 | bdb04280c9ab55eb20f86b375dc1aad20fbd5315 | e7d3d2792c8a1747c2efcef03003ac538522b400 | got it, I made the change | rom1504 | 40 |
lucidrains/DALLE-pytorch | 207 | Fix various DeepSpeed issues | Revert #204, which disabled GPU usage for VAE training.
Fix #161, fix #185.
- We now let DeepSpeed handle converting the model to FP16 and moving it to GPU(s).
- Remove hacks regarding DeepSpeed and GPU memory usage.
- Register external parameters explicitly (they could probably be detected automatically with DeepSpeed >= 0.3.15, but we (1) keep compatibility with older versions and (2) don't care about the case where they can _not_ be detected automatically). | null | 2021-04-20 14:25:47+00:00 | 2021-04-20 15:43:24+00:00 | train_dalle.py | import argparse
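# Hedged illustration of the two DeepSpeed-related points described above (the calls shown are
# assumptions about typical DeepSpeed usage, not code taken from this PR):
# - fp16 / device placement handled by DeepSpeed: the model is handed to the engine in fp32 on
#   the CPU and the conversion is driven by the config, e.g. {'fp16': {'enabled': args.fp16}}
#   passed through distr_backend.distribute(...), instead of calling dalle.half() / dalle.cuda()
#   manually before wrapping.
# - external parameters: a parameter used outside its owning module's forward() can be registered
#   explicitly with deepspeed.zero.register_external_parameter(module, param); newer DeepSpeed
#   releases (>= 0.3.15) can usually detect such parameters automatically.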
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
torch.cuda.empty_cache() # Avoid allocation error due to potential bug in deepspeed. See https://github.com/lucidrains/DALLE-pytorch/issues/161
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| janEbert | c9d371281e7d6f7e9fdde7cf0248a64e10dc74c0 | 9cce36d4bd9fb1ff590209a843ffe048d69ee59c | ah - finally :) | afiaka87 | 41 |
lucidrains/DALLE-pytorch | 207 | Fix various DeepSpeed issues | Revert #204 which disabled GPU usage for VAE training.
Fix #161, fix #185.
- We now let DeepSpeed handle converting the model to FP16 and moving it to GPU(s).
- Remove hacks regarding DeepSpeed and GPU memory usage.
- Register external parameters (could probably be detected automatically with DeepSpeed >= 0.3.15 but we (1) support compatibility for older versions and (2) don't care in case they can _not_ automatically be detected). | null | 2021-04-20 14:25:47+00:00 | 2021-04-20 15:43:24+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
torch.cuda.empty_cache() # Avoid allocation error due to potential bug in deepspeed. See https://github.com/lucidrains/DALLE-pytorch/issues/161
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY:
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| janEbert | c9d371281e7d6f7e9fdde7cf0248a64e10dc74c0 | 9cce36d4bd9fb1ff590209a843ffe048d69ee59c | Let's hope that's the last of it. ;) | janEbert | 42 |
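The PR description above also mentions registering external parameters, which is not visible in the `train_dalle.py` diff itself. As a rough, hypothetical sketch of that pattern (assuming the DALL-E module reaches into a wrapped VAE it does not own — the `vae` attribute here is an assumption, not taken from the diff):

```python
# Hypothetical sketch, not the PR's actual code: tell DeepSpeed about parameters a
# module uses during forward() but does not own, so older ZeRO versions can gather
# them correctly. Newer DeepSpeed (>= 0.3.15) can often detect this automatically.
import deepspeed

def register_vae_params(dalle_module):
    for param in dalle_module.vae.parameters():
        deepspeed.zero.register_external_parameter(dalle_module, param)
```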
lucidrains/DALLE-pytorch | 207 | Fix various DeepSpeed issues | Revert #204 which disabled GPU usage for VAE training.
Fix #161, fix #185.
- We now let DeepSpeed handle converting the model to FP16 and moving it to GPU(s).
- Remove hacks regarding DeepSpeed and GPU memory usage.
- Register external parameters (could probably be detected automatically with DeepSpeed >= 0.3.15 but we (1) support compatibility for older versions and (2) don't care in case they can _not_ automatically be detected). | null | 2021-04-20 14:25:47+00:00 | 2021-04-20 15:43:24+00:00 | train_vae.py | import math
from math import sqrt
import argparse
# torch
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import ExponentialLR
# vision imports
from torchvision import transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import DiscreteVAE
# argument parsing
parser = argparse.ArgumentParser()
parser.add_argument('--image_folder', type = str, required = True,
help='path to your folder of images for learning the discrete VAE and its codebook')
parser.add_argument('--image_size', type = int, required = False, default = 128,
help='image size')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# constants
IMAGE_SIZE = args.image_size
IMAGE_PATH = args.image_folder
EPOCHS = 20
BATCH_SIZE = 8
LEARNING_RATE = 1e-3
LR_DECAY_RATE = 0.98
NUM_TOKENS = 8192
NUM_LAYERS = 2
NUM_RESNET_BLOCKS = 2
SMOOTH_L1_LOSS = False
EMB_DIM = 512
HID_DIM = 256
KL_LOSS_WEIGHT = 0
STARTING_TEMP = 1.
TEMP_MIN = 0.5
ANNEAL_RATE = 1e-6
NUM_IMAGES_SAVE = 4
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# data
ds = ImageFolder(
IMAGE_PATH,
T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize(IMAGE_SIZE),
T.CenterCrop(IMAGE_SIZE),
T.ToTensor()
])
)
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, BATCH_SIZE, shuffle = not data_sampler, sampler=data_sampler)
vae_params = dict(
image_size = IMAGE_SIZE,
num_layers = NUM_LAYERS,
num_tokens = NUM_TOKENS,
codebook_dim = EMB_DIM,
hidden_dim = HID_DIM,
num_resnet_blocks = NUM_RESNET_BLOCKS
)
vae = DiscreteVAE(
**vae_params,
smooth_l1_loss = SMOOTH_L1_LOSS,
kl_div_loss_weight = KL_LOSS_WEIGHT
)
assert len(ds) > 0, 'folder does not contain any images'
if distr_backend.is_root_worker():
print(f'{len(ds)} images found for training')
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': vae_params,
'weights': vae.state_dict()
}
torch.save(save_obj, path)
# optimizer
opt = Adam(vae.parameters(), lr = LEARNING_RATE)
sched = ExponentialLR(optimizer = opt, gamma = LR_DECAY_RATE)
if distr_backend.is_root_worker():
# weights & biases experiment tracking
import wandb
model_config = dict(
num_tokens = NUM_TOKENS,
smooth_l1_loss = SMOOTH_L1_LOSS,
num_resnet_blocks = NUM_RESNET_BLOCKS,
kl_loss_weight = KL_LOSS_WEIGHT
)
run = wandb.init(
project = 'dalle_train_vae',
job_type = 'train_model',
config = model_config
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {'train_batch_size': BATCH_SIZE}
(distr_vae, distr_opt, distr_dl, distr_sched) = distr_backend.distribute(
args=args,
model=vae,
optimizer=opt,
model_parameters=vae.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=sched,
config_params=deepspeed_config,
)
# starting temperature
global_step = 0
temp = STARTING_TEMP
for epoch in range(EPOCHS):
for i, (images, _) in enumerate(distr_dl):
images = images.cuda()
loss, recons = distr_vae(
images,
return_loss = True,
return_recons = True,
temp = temp
)
if using_deepspeed:
# Gradients are automatically zeroed after the step
distr_vae.backward(loss)
distr_vae.step()
else:
distr_opt.zero_grad()
loss.backward()
distr_opt.step()
logs = {}
if i % 100 == 0:
if distr_backend.is_root_worker():
k = NUM_IMAGES_SAVE
with torch.no_grad():
codes = vae.get_codebook_indices(images[:k])
hard_recons = vae.decode(codes)
images, recons = map(lambda t: t[:k], (images, recons))
images, recons, hard_recons, codes = map(lambda t: t.detach().cpu(), (images, recons, hard_recons, codes))
images, recons, hard_recons = map(lambda t: make_grid(t.float(), nrow = int(sqrt(k)), normalize = True, range = (-1, 1)), (images, recons, hard_recons))
logs = {
**logs,
'sample images': wandb.Image(images, caption = 'original images'),
'reconstructions': wandb.Image(recons, caption = 'reconstructions'),
'hard reconstructions': wandb.Image(hard_recons, caption = 'hard reconstructions'),
'codebook_indices': wandb.Histogram(codes),
'temperature': temp
}
save_model(f'./vae.pt')
wandb.save('./vae.pt')
# temperature anneal
temp = max(temp * math.exp(-ANNEAL_RATE * global_step), TEMP_MIN)
# lr decay
distr_sched.step()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
if i % 10 == 0:
lr = distr_sched.get_last_lr()[0]
print(epoch, i, f'lr - {lr:6f} loss - {avg_loss.item()}')
logs = {
**logs,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item(),
'lr': lr
}
wandb.log(logs)
global_step += 1
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-vae', type = 'model', metadata = dict(model_config))
model_artifact.add_file('vae.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
# save final vae and cleanup
save_model('./vae-final.pt')
wandb.save('./vae-final.pt')
model_artifact = wandb.Artifact('trained-vae', type = 'model', metadata = dict(model_config))
model_artifact.add_file('vae-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import math
from math import sqrt
import argparse
# torch
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import ExponentialLR
# vision imports
from torchvision import transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import DiscreteVAE
# argument parsing
parser = argparse.ArgumentParser()
parser.add_argument('--image_folder', type = str, required = True,
help='path to your folder of images for learning the discrete VAE and its codebook')
parser.add_argument('--image_size', type = int, required = False, default = 128,
help='image size')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# constants
IMAGE_SIZE = args.image_size
IMAGE_PATH = args.image_folder
EPOCHS = 20
BATCH_SIZE = 8
LEARNING_RATE = 1e-3
LR_DECAY_RATE = 0.98
NUM_TOKENS = 8192
NUM_LAYERS = 2
NUM_RESNET_BLOCKS = 2
SMOOTH_L1_LOSS = False
EMB_DIM = 512
HID_DIM = 256
KL_LOSS_WEIGHT = 0
STARTING_TEMP = 1.
TEMP_MIN = 0.5
ANNEAL_RATE = 1e-6
NUM_IMAGES_SAVE = 4
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# data
ds = ImageFolder(
IMAGE_PATH,
T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize(IMAGE_SIZE),
T.CenterCrop(IMAGE_SIZE),
T.ToTensor()
])
)
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, BATCH_SIZE, shuffle = not data_sampler, sampler=data_sampler)
vae_params = dict(
image_size = IMAGE_SIZE,
num_layers = NUM_LAYERS,
num_tokens = NUM_TOKENS,
codebook_dim = EMB_DIM,
hidden_dim = HID_DIM,
num_resnet_blocks = NUM_RESNET_BLOCKS
)
vae = DiscreteVAE(
**vae_params,
smooth_l1_loss = SMOOTH_L1_LOSS,
kl_div_loss_weight = KL_LOSS_WEIGHT
)
if not using_deepspeed:
vae = vae.cuda()
assert len(ds) > 0, 'folder does not contain any images'
if distr_backend.is_root_worker():
print(f'{len(ds)} images found for training')
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': vae_params,
'weights': vae.state_dict()
}
torch.save(save_obj, path)
# optimizer
opt = Adam(vae.parameters(), lr = LEARNING_RATE)
sched = ExponentialLR(optimizer = opt, gamma = LR_DECAY_RATE)
if distr_backend.is_root_worker():
# weights & biases experiment tracking
import wandb
model_config = dict(
num_tokens = NUM_TOKENS,
smooth_l1_loss = SMOOTH_L1_LOSS,
num_resnet_blocks = NUM_RESNET_BLOCKS,
kl_loss_weight = KL_LOSS_WEIGHT
)
run = wandb.init(
project = 'dalle_train_vae',
job_type = 'train_model',
config = model_config
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {'train_batch_size': BATCH_SIZE}
(distr_vae, distr_opt, distr_dl, distr_sched) = distr_backend.distribute(
args=args,
model=vae,
optimizer=opt,
model_parameters=vae.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=sched,
config_params=deepspeed_config,
)
# starting temperature
global_step = 0
temp = STARTING_TEMP
for epoch in range(EPOCHS):
for i, (images, _) in enumerate(distr_dl):
images = images.cuda()
loss, recons = distr_vae(
images,
return_loss = True,
return_recons = True,
temp = temp
)
if using_deepspeed:
# Gradients are automatically zeroed after the step
distr_vae.backward(loss)
distr_vae.step()
else:
distr_opt.zero_grad()
loss.backward()
distr_opt.step()
logs = {}
if i % 100 == 0:
if distr_backend.is_root_worker():
k = NUM_IMAGES_SAVE
with torch.no_grad():
codes = vae.get_codebook_indices(images[:k])
hard_recons = vae.decode(codes)
images, recons = map(lambda t: t[:k], (images, recons))
images, recons, hard_recons, codes = map(lambda t: t.detach().cpu(), (images, recons, hard_recons, codes))
images, recons, hard_recons = map(lambda t: make_grid(t.float(), nrow = int(sqrt(k)), normalize = True, range = (-1, 1)), (images, recons, hard_recons))
logs = {
**logs,
'sample images': wandb.Image(images, caption = 'original images'),
'reconstructions': wandb.Image(recons, caption = 'reconstructions'),
'hard reconstructions': wandb.Image(hard_recons, caption = 'hard reconstructions'),
'codebook_indices': wandb.Histogram(codes),
'temperature': temp
}
save_model(f'./vae.pt')
wandb.save('./vae.pt')
# temperature anneal
temp = max(temp * math.exp(-ANNEAL_RATE * global_step), TEMP_MIN)
# lr decay
distr_sched.step()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
if i % 10 == 0:
lr = distr_sched.get_last_lr()[0]
print(epoch, i, f'lr - {lr:6f} loss - {avg_loss.item()}')
logs = {
**logs,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item(),
'lr': lr
}
wandb.log(logs)
global_step += 1
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-vae', type = 'model', metadata = dict(model_config))
model_artifact.add_file('vae.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
# save final vae and cleanup
save_model('./vae-final.pt')
wandb.save('./vae-final.pt')
model_artifact = wandb.Artifact('trained-vae', type = 'model', metadata = dict(model_config))
model_artifact.add_file('vae-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| janEbert | c9d371281e7d6f7e9fdde7cf0248a64e10dc74c0 | 9cce36d4bd9fb1ff590209a843ffe048d69ee59c | oh wow - forgot about the cuda call to the vae. | afiaka87 | 43
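The diffs in these records only show the new `if not using_deepspeed:` guards around `.half()` / `.cuda()`. As a minimal sketch of what "letting DeepSpeed handle converting the model to FP16 and moving it to GPU(s)" means in practice — mirroring the `deepspeed_config` dictionaries used in these scripts, and making no claim about the PR's actual wiring — the engine applies the fp16 cast and the GPU move itself:

```python
# Minimal sketch, assuming a DeepSpeed launch environment: the raw fp32 CPU model is
# handed to deepspeed.initialize, and fp16/device handling is driven by the config.
import deepspeed
import torch.nn as nn

model = nn.Linear(512, 512)  # stand-in for the DALL-E / VAE model
config = {
    'train_batch_size': 4,
    'gradient_clipping': 0.5,
    'fp16': {'enabled': True},
}
engine, opt, _, _ = deepspeed.initialize(
    model = model,
    model_parameters = model.parameters(),
    config_params = config,  # older DeepSpeed API; newer releases accept `config=`
)
# No manual model.half() or model.cuda() calls are needed on this path.
```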
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | .gitignore | # dall-e generation outputs
outputs/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
| # dall-e generation outputs
outputs/
*.pt
taming/
wandb/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | hmm, perhaps this should be *.pt ? | lucidrains | 44 |
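The refactored `dalle_pytorch/loader.py` that this PR introduces is not part of this record, so here is a generic, illustrative sketch of the "error handling and index skipping" idea from the description (class and helper names are invented for illustration): on a failed read or decode, the dataset retries with a neighbouring index instead of aborting the epoch.

```python
# Illustrative sketch only, not the actual dalle_pytorch/loader.py.
from random import choice
from PIL import Image
from torch.utils.data import Dataset

class SkippingTextImageDataset(Dataset):
    def __init__(self, keys, text_files, image_files, tokenize, transform):
        self.keys = keys                  # stems shared by matched text/image pairs
        self.text_files = text_files      # {stem: Path to .txt}
        self.image_files = image_files    # {stem: Path to image file}
        self.tokenize = tokenize
        self.transform = transform

    def __len__(self):
        return len(self.keys)

    def skip_sample(self, ind):
        # Fall back to the next index (wrapping around) when this sample is unusable.
        return self.__getitem__((ind + 1) % len(self))

    def __getitem__(self, ind):
        key = self.keys[ind]
        try:
            descriptions = [d for d in self.text_files[key].read_text().split('\n') if d]
            image = Image.open(self.image_files[key])
        except (OSError, UnicodeDecodeError):
            return self.skip_sample(ind)  # error handling + index skipping
        if not descriptions:
            return self.skip_sample(ind)
        return self.tokenize(choice(descriptions)), self.transform(image)
```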
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | Please negate this, we want to _avoid_ shuffling only if using Horovod. :) | janEbert | 45 |
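For context on the comment above: shuffling should be requested from the DataLoader only when Horovod is not in use, because under Horovod a DistributedSampler already shards (and, by default, shuffles) the data per worker, and PyTorch does not allow shuffle=True together with a sampler. A minimal sketch of the negated check, using the names that appear in the revised script in this row:

    # shuffle in the DataLoader unless Horovod is driving a DistributedSampler
    is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
    if not is_shuffle:
        data_sampler = torch.utils.data.distributed.DistributedSampler(
            ds, num_replicas=distr_backend.get_world_size(), rank=distr_backend.get_rank())
    else:
        data_sampler = None
    dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)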
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | You can remove this `argparse` import; it's already imported (and the PEP 8 import order would be violated here as `argparse` is in the stdlib). | janEbert | 46 |
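For context on the comment above: `argparse` only needs to be imported once, and PEP 8 orders imports as standard library first, then third-party packages, then the project's own modules, with a blank line between groups. A short illustrative sketch of that grouping (the exact arrangement here is only an example) using modules this script imports:

    import argparse                 # standard library
    from pathlib import Path

    import torch                    # third-party
    import wandb
    from torch.utils.data import DataLoader

    from dalle_pytorch import distributed_utils        # project package
    from dalle_pytorch.loader import TextImageDataset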
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | I thought you wanted those kwargs without spaces. ;) | janEbert | 47 |
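For context on the comment above: PEP 8 recommends no spaces around `=` when it introduces a keyword argument, which is the stylistic difference between the two versions of the script in this row, e.g.:

    # old style, spaces around '=' in keyword arguments
    dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler, drop_last = True, sampler=data_sampler)
    # PEP 8 style, no spaces around '=' for keyword arguments
    dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)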
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | Kwarg with inconsistent spaces here. | janEbert | 48 |
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | well since you're in support of it ha - i went ahead and ran autopep8 on train_dalle.py. I don't really mind one way or the other so long as we're all on the same page. | afiaka87 | 49 |
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | It would appear that I do, but only on code I touch ha. I just ran autopep8 on it. | afiaka87 | 50 |
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | I thought @janEbert removed this in an earlier pull request | lucidrains | 51 |
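The rows in this dump describe the PR as adding error handling and index skipping to TextImageDataset, which was refactored into dalle_pytorch/loader.py; that file itself is not included here. Below is a minimal, hypothetical sketch of what such a skip-on-error dataset can look like — the class name, the exception choices, and the `skip_sample` helper are assumptions, not the PR's actual code; only the text/image pairing and the `tokenizer.tokenize(..., truncate_text=...)` call mirror the training script shown in these rows.

```python
# Hypothetical illustration only -- the PR's actual dalle_pytorch/loader.py is not part of this dump.
from random import choice, randint
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class SkippingTextImageDataset(Dataset):
    """Text-image pairs; unreadable samples are skipped by retrying a random index."""

    def __init__(self, folder, transform, tokenizer, text_len=256, truncate_captions=False):
        path = Path(folder)
        texts = {p.stem: p for p in path.glob('**/*.txt')}
        images = {p.stem: p for p in list(path.glob('**/*.png')) + list(path.glob('**/*.jpg'))}
        self.keys = sorted(texts.keys() & images.keys())     # only stems that have both files
        self.texts, self.images = texts, images
        self.transform, self.tokenizer = transform, tokenizer
        self.text_len, self.truncate_captions = text_len, truncate_captions

    def __len__(self):
        return len(self.keys)

    def skip_sample(self, ind):
        # index skipping: fall back to another random sample instead of crashing the epoch
        # (assumes at least some samples in the folder are readable)
        return self[randint(0, len(self) - 1)]

    def __getitem__(self, ind):
        key = self.keys[ind]
        try:
            captions = [c for c in self.texts[key].read_text().split('\n') if c]
            caption = choice(captions)                            # IndexError if the caption file is empty
            image = Image.open(self.images[key]).convert('RGB')   # OSError if the image is corrupt
        except (OSError, IndexError) as err:
            print(f'skipping sample {ind} ({key}): {err}')
            return self.skip_sample(ind)
        tokens = self.tokenizer.tokenize(caption, self.text_len,
                                         truncate_text=self.truncate_captions).squeeze(0)
        return tokens, self.transform(image)
```

The design choice in this pattern is to trade strict determinism for robustness: a corrupt image or empty caption file retries another random index rather than aborting the whole epoch.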
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | Yeah, you can remove this line. :) | janEbert | 52 |
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | Here, please keep moving the model to the GPU (`dalle = dalle.cuda()`) in the `if not using_deepspeed` block. We want to let DeepSpeed handle everything after model creation, both FP16 conversion and moving to GPU. That's why I put it into the `if`-block as well. | janEbert | 53 |
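The review comment above is about ownership of precision and device placement: outside DeepSpeed the script converts and moves the model itself, while under DeepSpeed both are left to `distr_backend.distribute()` via the `'fp16': {'enabled': args.fp16}` entry in `deepspeed_config`. A small sketch restating that pattern as a standalone helper follows — the function name `prepare_model` is hypothetical; the branch simply mirrors the updated `train_dalle.py` in these rows.

```python
import torch

def prepare_model(dalle: torch.nn.Module, use_deepspeed: bool, fp16: bool) -> torch.nn.Module:
    """Mirror of the pattern the reviewer asks for in train_dalle.py:
    only touch precision/device when DeepSpeed is NOT managing the model."""
    if not use_deepspeed:
        if fp16:
            dalle = dalle.half()   # manual fp16 conversion only outside DeepSpeed
        dalle = dalle.cuda()       # manual GPU placement likewise stays in this branch
    # under DeepSpeed, fp16 and placement happen inside distr_backend.distribute(),
    # configured through deepspeed_config = {'fp16': {'enabled': fp16}, ...}
    return dalle
```

Keeping placement and half-precision out of the DeepSpeed path avoids the script second-guessing what the engine already does when it wraps the model, which is exactly the reasoning the reviewer gives.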
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | I like consistency, but it's not my codebase, so my opinion really doesn't matter. :p | janEbert | 54 |
lucidrains/DALLE-pytorch | 205 | Refactor ImageTextDataset to its own file. Implement error handling a… | …nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py. | null | 2021-04-19 21:22:09+00:00 | 2021-04-21 23:23:14+00:00 | train_dalle.py | import argparse
from random import choice
from pathlib import Path
# torch
import torch
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
# vision imports
from PIL import Image
from torchvision import transforms as T
from torch.utils.data import DataLoader, Dataset
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid, save_image
# dalle related classes and utils
from dalle_pytorch import distributed_utils
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required = False)
group.add_argument('--vae_path', type = str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type = str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type = str, required = True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type = float, default = 0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action = 'store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action = 'store_true')
parser.add_argument('--bpe_path', type = str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens = tokenizer.vocab_size,
text_seq_len = TEXT_SEQ_LEN,
dim = MODEL_DIM,
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD,
reversible = REVERSIBLE,
loss_img_weight = LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# dataset loading
class TextImageDataset(Dataset):
def __init__(self, folder, text_len = 256, image_size = 128):
super().__init__()
path = Path(folder)
text_files = [*path.glob('**/*.txt')]
image_files = [
*path.glob('**/*.png'),
*path.glob('**/*.jpg'),
*path.glob('**/*.jpeg'),
*path.glob('**/*.bmp')
]
text_files = {t.stem: t for t in text_files}
image_files = {i.stem: i for i in image_files}
keys = (image_files.keys() & text_files.keys())
self.keys = list(keys)
self.text_files = {k: v for k, v in text_files.items() if k in keys}
self.image_files = {k: v for k, v in image_files.items() if k in keys}
self.text_len = text_len
self.image_tranform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(image_size, scale = (args.resize_ratio, 1.), ratio = (1., 1.)),
T.ToTensor()
])
def __len__(self):
return len(self.keys)
def __getitem__(self, ind):
key = self.keys[ind]
text_file = self.text_files[key]
image_file = self.image_files[key]
image = Image.open(image_file)
descriptions = text_file.read_text().split('\n')
descriptions = list(filter(lambda t: len(t) > 0, descriptions))
description = choice(descriptions)
tokenized_text = tokenizer.tokenize(description, self.text_len, truncate_text=args.truncate_captions).squeeze(0)
image_tensor = self.image_tranform(image)
return tokenized_text, image_tensor
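# Editor's sketch (not part of the original script): the "error handling and index
# skipping" mentioned in this PR's summary is implemented in dalle_pytorch/loader.py
# and may differ in detail; the subclass below only illustrates the idea of falling
# back to a neighbouring index when a sample's caption is empty or its image is
# unreadable. It is not used by the training code that follows.
class SkippingTextImageDataset(TextImageDataset):
    def skip_sample(self, ind):
        # try the next sample instead of raising and killing the whole epoch
        return self.__getitem__((ind + 1) % len(self))

    def __getitem__(self, ind):
        key = self.keys[ind]
        try:
            image = Image.open(self.image_files[key])
            descriptions = list(filter(lambda t: len(t) > 0,
                                       self.text_files[key].read_text().split('\n')))
        except (OSError, UnicodeDecodeError) as err:
            print(f'skipping unreadable sample for key {key}: {err}')
            return self.skip_sample(ind)
        if len(descriptions) == 0:
            return self.skip_sample(ind)
        description = choice(descriptions)
        tokenized_text = tokenizer.tokenize(description, self.text_len,
                                            truncate_text=args.truncate_captions).squeeze(0)
        return tokenized_text, self.image_tranform(image)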
# create dataset and dataloader
ds = TextImageDataset(
args.image_text_folder,
text_len = TEXT_SEQ_LEN,
image_size = IMAGE_SIZE
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if distributed_utils.using_backend(distributed_utils.HorovodBackend):
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds, num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank())
else:
data_sampler = None
dl = DataLoader(ds, batch_size = BATCH_SIZE, shuffle = not data_sampler,
drop_last = True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae = vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr = LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode = "min",
factor = 0.5,
patience = 10,
cooldown = 10,
min_lr = 1e-6,
verbose = True,
)
if distr_backend.is_root_worker():
# experiment tracker
import wandb
model_config = dict(
depth = DEPTH,
heads = HEADS,
dim_head = DIM_HEAD
)
run = wandb.init(
project = 'dalle_train_transformer',
resume = RESUME,
config = model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss = True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres = 0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption = decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type = 'model', metadata = dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| import argparse
from pathlib import Path
import torch
import wandb # Quit early if user doesn't have wandb installed.
from torch.nn.utils import clip_grad_norm_
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader
from dalle_pytorch import OpenAIDiscreteVAE, VQGanVAE1024, DiscreteVAE, DALLE
from dalle_pytorch import distributed_utils
from dalle_pytorch.loader import TextImageDataset
from dalle_pytorch.tokenizer import tokenizer, HugTokenizer, ChineseTokenizer, YttmTokenizer
# argument parsing
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument('--vae_path', type=str,
help='path to your trained discrete VAE')
group.add_argument('--dalle_path', type=str,
help='path to your partially trained DALL-E')
parser.add_argument('--image_text_folder', type=str, required=True,
help='path to your folder of images and text for learning the DALL-E')
parser.add_argument('--truncate_captions', dest='truncate_captions', action='store_true',
help='Captions passed in which exceed the max token length will be truncated if this is set.')
parser.add_argument('--random_resize_crop_lower_ratio', dest='resize_ratio', type=float, default=0.75,
help='Random resized crop lower ratio')
parser.add_argument('--chinese', dest='chinese', action='store_true')
parser.add_argument('--taming', dest='taming', action='store_true')
parser.add_argument('--hug', dest='hug', action='store_true')
parser.add_argument('--bpe_path', type=str,
help='path to your huggingface BPE json file')
parser.add_argument('--fp16', action='store_true',
help='(experimental) - Enable DeepSpeed 16 bit precision. Reduces VRAM.')
parser.add_argument('--wandb_name', default='dalle_train_transformer',
help='Name W&B will use when saving results.\ne.g. `--wandb_name "coco2017-full-sparse"`')
parser = distributed_utils.wrap_arg_parser(parser)
args = parser.parse_args()
# quit early if you used the wrong folder name
assert Path(args.image_text_folder).exists(), f'The path {args.image_text_folder} was not found.'
# helpers
def exists(val):
return val is not None
# constants
VAE_PATH = args.vae_path
DALLE_PATH = args.dalle_path
RESUME = exists(DALLE_PATH)
EPOCHS = 20
BATCH_SIZE = 4
LEARNING_RATE = 3e-4
GRAD_CLIP_NORM = 0.5
MODEL_DIM = 512
TEXT_SEQ_LEN = 256
DEPTH = 2
HEADS = 4
DIM_HEAD = 64
REVERSIBLE = True
LOSS_IMG_WEIGHT = 7
LR_DECAY = False
# initialize distributed backend
distr_backend = distributed_utils.set_backend_from_args(args)
distr_backend.initialize()
using_deepspeed = \
distributed_utils.using_backend(distributed_utils.DeepSpeedBackend)
# tokenizer
if exists(args.bpe_path):
klass = HugTokenizer if args.hug else YttmTokenizer
tokenizer = klass(args.bpe_path)
elif args.chinese:
tokenizer = ChineseTokenizer()
# reconstitute vae
if RESUME:
dalle_path = Path(DALLE_PATH)
assert dalle_path.exists(), 'DALL-E model file does not exist'
loaded_obj = torch.load(str(dalle_path), map_location='cpu')
dalle_params, vae_params, weights = loaded_obj['hparams'], loaded_obj['vae_params'], loaded_obj['weights']
if vae_params is not None:
vae = DiscreteVAE(**vae_params)
else:
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
dalle_params = dict(
**dalle_params
)
IMAGE_SIZE = vae.image_size
else:
if exists(VAE_PATH):
vae_path = Path(VAE_PATH)
assert vae_path.exists(), 'VAE model file does not exist'
loaded_obj = torch.load(str(vae_path))
vae_params, weights = loaded_obj['hparams'], loaded_obj['weights']
vae = DiscreteVAE(**vae_params)
vae.load_state_dict(weights)
else:
if distr_backend.is_root_worker():
print('using pretrained VAE for encoding images to tokens')
vae_params = None
vae_klass = OpenAIDiscreteVAE if not args.taming else VQGanVAE1024
vae = vae_klass()
IMAGE_SIZE = vae.image_size
dalle_params = dict(
num_text_tokens=tokenizer.vocab_size,
text_seq_len=TEXT_SEQ_LEN,
dim=MODEL_DIM,
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD,
reversible=REVERSIBLE,
loss_img_weight=LOSS_IMG_WEIGHT
)
# configure OpenAI VAE for float16s
if isinstance(vae, OpenAIDiscreteVAE) and args.fp16:
vae.enc.blocks.output.conv.use_float16 = True
# helpers
def save_model(path):
if not distr_backend.is_root_worker():
return
save_obj = {
'hparams': dalle_params,
'vae_params': vae_params,
'weights': dalle.state_dict()
}
torch.save(save_obj, path)
def group_weight(model):
group_decay, group_no_decay = [], []
for params in model.named_parameters():
if 'transformer' in params[0]:
if 'bias' in params[0] or 'norm' in params[0]:
group_no_decay.append(params[1])
continue
group_decay.append(params[1])
assert len(list(model.parameters())) == len(group_decay) + len(group_no_decay)
groups = [dict(params=group_decay), dict(params=group_no_decay, weight_decay=.0)]
return groups
# create dataset and dataloader
is_shuffle = not distributed_utils.using_backend(distributed_utils.HorovodBackend)
ds = TextImageDataset(
args.image_text_folder,
text_len=TEXT_SEQ_LEN,
image_size=IMAGE_SIZE,
resize_ratio=args.resize_ratio,
truncate_captions=args.truncate_captions,
tokenizer=tokenizer,
shuffle=is_shuffle,
)
assert len(ds) > 0, 'dataset is empty'
if distr_backend.is_root_worker():
print(f'{len(ds)} image-text pairs found for training')
if not is_shuffle:
data_sampler = torch.utils.data.distributed.DistributedSampler(
ds,
num_replicas=distr_backend.get_world_size(),
rank=distr_backend.get_rank()
)
else:
data_sampler = None
dl = DataLoader(ds, batch_size=BATCH_SIZE, shuffle=is_shuffle, drop_last=True, sampler=data_sampler)
# initialize DALL-E
dalle = DALLE(vae=vae, **dalle_params)
if not using_deepspeed:
if args.fp16:
dalle = dalle.half()
dalle = dalle.cuda()
if RESUME:
dalle.load_state_dict(weights)
# optimizer
opt = Adam(dalle.parameters(), lr=LEARNING_RATE)
if LR_DECAY:
scheduler = ReduceLROnPlateau(
opt,
mode="min",
factor=0.5,
patience=10,
cooldown=10,
min_lr=1e-6,
verbose=True,
)
if distr_backend.is_root_worker():
# experiment tracker
model_config = dict(
depth=DEPTH,
heads=HEADS,
dim_head=DIM_HEAD
)
run = wandb.init(
project=args.wandb_name, # 'dalle_train_transformer' by default
resume=RESUME,
config=model_config,
)
# distribute
distr_backend.check_batch_size(BATCH_SIZE)
deepspeed_config = {
'train_batch_size': BATCH_SIZE,
'gradient_clipping': GRAD_CLIP_NORM,
'fp16': {
'enabled': args.fp16,
},
}
(distr_dalle, distr_opt, distr_dl, distr_scheduler) = distr_backend.distribute(
args=args,
model=dalle,
optimizer=opt,
model_parameters=dalle.parameters(),
training_data=ds if using_deepspeed else dl,
lr_scheduler=scheduler if LR_DECAY else None,
config_params=deepspeed_config,
)
avoid_model_calls = using_deepspeed and args.fp16
# training
for epoch in range(EPOCHS):
for i, (text, images) in enumerate(distr_dl):
if args.fp16:
images = images.half()
text, images = map(lambda t: t.cuda(), (text, images))
loss = distr_dalle(text, images, return_loss=True)
if using_deepspeed:
distr_dalle.backward(loss)
distr_dalle.step()
# Gradients are automatically zeroed after the step
else:
loss.backward()
clip_grad_norm_(distr_dalle.parameters(), GRAD_CLIP_NORM)
distr_opt.step()
distr_opt.zero_grad()
# Collective loss, averaged
avg_loss = distr_backend.average_all(loss)
if distr_backend.is_root_worker():
log = {}
if i % 10 == 0:
print(epoch, i, f'loss - {avg_loss.item()}')
log = {
**log,
'epoch': epoch,
'iter': i,
'loss': avg_loss.item()
}
if i % 100 == 0:
sample_text = text[:1]
token_list = sample_text.masked_select(sample_text != 0).tolist()
decoded_text = tokenizer.decode(token_list)
if not avoid_model_calls:
# CUDA index errors when we don't guard this
image = dalle.generate_images(text[:1], filter_thres=0.9) # topk sampling at 0.9
save_model(f'./dalle.pt')
wandb.save(f'./dalle.pt')
log = {
**log,
}
if not avoid_model_calls:
log['image'] = wandb.Image(image, caption=decoded_text)
wandb.log(log)
if LR_DECAY and not using_deepspeed:
# Scheduler is automatically progressed after the step when
# using DeepSpeed.
distr_scheduler.step(loss)
if distr_backend.is_root_worker():
# save trained model to wandb as an artifact every epoch's end
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle.pt')
run.log_artifact(model_artifact)
if distr_backend.is_root_worker():
save_model(f'./dalle-final.pt')
wandb.save('./dalle-final.pt')
model_artifact = wandb.Artifact('trained-dalle', type='model', metadata=dict(model_config))
model_artifact.add_file('dalle-final.pt')
run.log_artifact(model_artifact)
wandb.finish()
| afiaka87 | 130da7f21767c3c0cebb1e3622b2c68abc270d76 | 2d314aaed157ce5d734561dc064f2854ebe36866 | Gotcha that was a mis-merge. Good catch. | afiaka87 | 55 |
posativ/isso | 952 | Allow umlaut domains for website addresses | ## Checklist
- [x] All new and existing **tests are passing**
- [x] (If adding features:) I have added tests to cover my changes
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
## What changes does this Pull Request introduce?
Changed website validation to allow domain names containing umlauts
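For illustration, here is a rough sketch of the kind of change this implies (the `UL` range and the test domains below are illustrative, not necessarily the exact pattern merged in this PR): the ASCII-only character classes of the Django-style `__url_re` are widened with a unicode letter range, similar to Django's `URLValidator`, so that a domain such as `müller-shop.de` passes `isurl()`.

```python
import re

UL = '\u00a1-\uffff'  # unicode letters, as used by Django's URLValidator (illustrative)

url_re = re.compile(
    r'^(https?://)?'
    # domain labels: ASCII letters/digits plus unicode letters (umlauts etc.)
    r'(?:(?:[A-Z0-9' + UL + r'](?:[A-Z0-9' + UL + r'-]{0,61}[A-Z0-9' + UL + r'])?\.)+'
    r'(?:[A-Z' + UL + r']{2,6}\.?|[A-Z0-9' + UL + r'-]{2,}\.?)|'
    r'localhost|'
    r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
    r'(?::\d+)?'
    r'(?:/?|[/?]\S+)$', re.IGNORECASE)

assert url_re.match('https://müller-shop.de/impressum')
assert url_re.match('bücher.example/')
```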
## Why is this necessary?
Resolves issue #951 | null | 2023-04-18 06:57:32+00:00 | 2023-08-04 13:01:56+00:00 | isso/views/comments.py | # -*- encoding: utf-8 -*-
import collections
import re
import time
import functools
import json # json.dumps to put URL in <script>
import pkg_resources
from configparser import NoOptionError
from datetime import datetime, timedelta
from html import escape
from io import BytesIO as StringIO
from os import path as os_path
from urllib.parse import unquote, urlparse
from xml.etree import ElementTree as ET
from itsdangerous import SignatureExpired, BadSignature
from werkzeug.exceptions import BadRequest, Forbidden, NotFound
from werkzeug.http import dump_cookie
from werkzeug.routing import Rule
from werkzeug.utils import redirect, send_from_directory
from werkzeug.wrappers import Response
from werkzeug.wsgi import get_current_url
from isso import utils, local
from isso.utils import (http, parse,
JSONResponse as JSON, XMLResponse as XML,
render_template)
from isso.utils.hash import md5, sha1
from isso.views import requires
# from Django apparently, looks good to me *duck*
__url_re = re.compile(
r'^'
r'(https?://)?'
# domain...
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)'
r'$', re.IGNORECASE)
def isurl(text):
return __url_re.match(text) is not None
def normalize(url):
if not url.startswith(("http://", "https://")):
return "http://" + url
return url
def xhr(func):
"""A decorator to check for CSRF on POST/PUT/DELETE using a <form>
element and JS to execute automatically (see #40 for a proof-of-concept).
When an attacker uses a <form> to downvote a comment, the browser *should*
add a `Content-Type: ...` header with three possible values:
* application/x-www-form-urlencoded
* multipart/form-data
* text/plain
If the header is not sent or requests `application/json`, the request is
not forged (XHR is restricted by CORS separately).
"""
"""
@apiDefine csrf
@apiHeader {String="application/json"} Content-Type
The content type must be set to `application/json` to prevent CSRF attacks.
"""
def dec(self, env, req, *args, **kwargs):
if req.content_type and not req.content_type.startswith("application/json"):
raise Forbidden("CSRF")
return func(self, env, req, *args, **kwargs)
return dec
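# Editor's illustration (not part of the original isso source): an XHR/fetch request
# passes this check when it either omits the Content-Type header or sets it to
# application/json, e.g.
#
#   curl -X POST 'https://comments.example.com/id/23/like' \
#        -H 'Content-Type: application/json'
#
# whereas a plain <form> submission (application/x-www-form-urlencoded,
# multipart/form-data or text/plain) is rejected with "403 Forbidden: CSRF".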
class API(object):
FIELDS = set(['id', 'parent', 'text', 'author', 'website',
'mode', 'created', 'modified', 'likes', 'dislikes', 'hash', 'gravatar_image', 'notification'])
# comment fields, that can be submitted
ACCEPT = set(['text', 'author', 'website', 'email', 'parent', 'title', 'notification'])
VIEWS = [
('fetch', ('GET', '/')),
('new', ('POST', '/new')),
('counts', ('POST', '/count')),
('feed', ('GET', '/feed')),
('latest', ('GET', '/latest')),
('view', ('GET', '/id/<int:id>')),
('edit', ('PUT', '/id/<int:id>')),
('delete', ('DELETE', '/id/<int:id>')),
('unsubscribe', ('GET', '/id/<int:id>/unsubscribe/<string:email>/<string:key>')),
('moderate', ('GET', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('moderate', ('POST', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('like', ('POST', '/id/<int:id>/like')),
('dislike', ('POST', '/id/<int:id>/dislike')),
('demo', ('GET', '/demo/')),
('preview', ('POST', '/preview')),
('config', ('GET', '/config')),
('login', ('POST', '/login/')),
('admin', ('GET', '/admin/'))
]
def __init__(self, isso, hasher):
self.isso = isso
self.hash = hasher.uhash
self.cache = isso.cache
self.signal = isso.signal
self.conf = isso.conf.section("general")
self.moderated = isso.conf.getboolean("moderation", "enabled")
# this is similar to the wordpress setting "Comment author must have a previously approved comment"
try:
self.approve_if_email_previously_approved = isso.conf.getboolean("moderation", "approve-if-email-previously-approved")
except NoOptionError:
self.approve_if_email_previously_approved = False
try:
self.trusted_proxies = list(isso.conf.getiter("server", "trusted-proxies"))
except NoOptionError:
self.trusted_proxies = []
# These configuration records can be read out by client
self.public_conf = {}
self.public_conf["reply-to-self"] = isso.conf.getboolean("guard", "reply-to-self")
self.public_conf["require-email"] = isso.conf.getboolean("guard", "require-email")
self.public_conf["require-author"] = isso.conf.getboolean("guard", "require-author")
self.public_conf["reply-notifications"] = isso.conf.getboolean("general", "reply-notifications")
self.public_conf["gravatar"] = isso.conf.getboolean("general", "gravatar")
if self.public_conf["gravatar"]:
self.public_conf["avatar"] = False
self.public_conf["feed"] = False
rss = isso.conf.section("rss")
if rss and rss.get('base'):
self.public_conf["feed"] = True
self.guard = isso.db.guard
self.threads = isso.db.threads
self.comments = isso.db.comments
for (view, (method, path)) in self.VIEWS:
isso.urls.add(
Rule(path, methods=[method], endpoint=getattr(self, view)))
@classmethod
def verify(cls, comment):
if comment.get("text") is None:
return False, "text is missing"
if not isinstance(comment.get("parent"), (int, type(None))):
return False, "parent must be an integer or null"
for key in ("text", "author", "website", "email"):
if not isinstance(comment.get(key), (str, type(None))):
return False, "%s must be a string or null" % key
if len(comment["text"].rstrip()) < 3:
return False, "text is too short (minimum length: 3)"
if len(comment["text"]) > 65535:
return False, "text is too long (maximum length: 65535)"
if len(comment.get("email") or "") > 254:
return False, "http://tools.ietf.org/html/rfc5321#section-4.5.3"
if comment.get("website"):
if len(comment["website"]) > 254:
return False, "arbitrary length limit"
if not isurl(comment["website"]):
return False, "Website not Django-conform"
return True, ""
# Common definitions for apidoc follow:
"""
@apiDefine plainParam
@apiQuery {Number=0,1} [plain=0]
If set to `1`, the plain text entered by the user will be returned in the comments’ `text` attribute (instead of the rendered markdown).
"""
"""
@apiDefine commentResponse
@apiSuccess {Number} id
The comment’s id (assigned by the server).
@apiSuccess {Number} parent
Id of the comment this comment is a reply to. `null` if this is a top-level-comment.
@apiSuccess {Number=1,2,4} mode
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiSuccess {String} author
The comments’s author’s name or `null`.
@apiSuccess {String} website
The comment’s author’s website or `null`.
@apiSuccess {String} hash
A hash uniquely identifying the comment’s author.
@apiSuccess {Number} created
UNIX timestamp of the time the comment was created (on the server).
@apiSuccess {Number} modified
UNIX timestamp of the time the comment was last modified (on the server). `null` if the comment was not yet modified.
"""
"""
@apiDefine admin Admin access needed
Only available to a logged-in site admin. Requires a valid `admin-session` cookie.
"""
"""
@api {post} /new create new
@apiGroup Comment
@apiName new
@apiVersion 0.12.6
@apiDescription
Creates a new comment. The server issues a cookie per new comment which acts as
an authentication token to modify or delete the comment.
The token is cryptographically signed and expires automatically after 900 seconds (=15min) by default.
@apiUse csrf
@apiQuery {String} uri
The uri of the thread to create the comment on.
@apiBody {String{3...65535}} text
The comment’s raw text.
@apiBody {String} [author]
The comment’s author’s name.
@apiBody {String{...254}} [email]
The comment’s author’s email address.
@apiBody {String{...254}} [website]
The comment’s author’s website’s url. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiBody {Number} [parent]
The parent comment’s id if the new comment is a response to an existing comment.
@apiExample {curl} Create a reply to comment with id 15:
curl 'https://comments.example.com/new?uri=/thread/' -d '{"text": "Stop saying that! *isso*!", "author": "Max Rant", "email": "[email protected]", "parent": 15}' -H 'Content-Type: application/json' -c cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Success after the above request:
HTTP/1.1 201 CREATED
Set-Cookie: 1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
X-Set-Cookie: isso-1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
{
"website": null,
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>Stop saying that! <em>isso</em>!</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "e644f6ee43c0",
"id": 23,
"likes": 0
}
"""
@xhr
@requires(str, 'uri')
def new(self, environ, request, uri):
data = request.json
for field in set(data.keys()) - API.ACCEPT:
data.pop(field)
for key in ("author", "email", "website", "parent"):
data.setdefault(key, None)
valid, reason = API.verify(data)
if not valid:
return BadRequest(reason)
for field in ("author", "email", "website"):
if data.get(field) is not None:
data[field] = escape(data[field], quote=False)
if data.get("website"):
data["website"] = normalize(data["website"])
data['mode'] = 2 if self.moderated else 1
data['remote_addr'] = self._remote_addr(request)
with self.isso.lock:
if uri not in self.threads:
if not data.get('title'):
with http.curl('GET', local("origin"), uri) as resp:
if resp and resp.status == 200:
uri, title = parse.thread(resp.read(), id=uri)
else:
return NotFound('URI does not exist %s')
else:
title = data['title']
thread = self.threads.new(uri, title)
self.signal("comments.new:new-thread", thread)
else:
thread = self.threads[uri]
# notify extensions that the new comment is about to save
self.signal("comments.new:before-save", thread, data)
valid, reason = self.guard.validate(uri, data)
if not valid:
self.signal("comments.new:guard", reason)
raise Forbidden(reason)
with self.isso.lock:
# if email-based auto-moderation enabled, check for previously approved author
# right before approval.
if self.approve_if_email_previously_approved and self.comments.is_previously_approved_author(data['email']):
data['mode'] = 1
rv = self.comments.add(uri, data)
# notify extension, that the new comment has been successfully saved
self.signal("comments.new:after-save", thread, rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
rv["hash"] = self.hash(rv['email'] or rv['remote_addr'])
self.cache.set(
'hash', (rv['email'] or rv['remote_addr']).encode('utf-8'), rv['hash'])
rv = self._add_gravatar_image(rv)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
# success!
self.signal("comments.new:finish", thread, rv)
resp = JSON(rv, 202 if rv["mode"] == 2 else 201)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
def _remote_addr(self, request):
"""Return the anonymized IP address of the requester.
Takes into consideration a potential X-Forwarded-For HTTP header
if a necessary server.trusted-proxies configuration entry is set.
Recipe source: https://stackoverflow.com/a/22936947/636849
"""
remote_addr = request.remote_addr
if self.trusted_proxies:
route = request.access_route + [remote_addr]
remote_addr = next((addr for addr in reversed(route)
if addr not in self.trusted_proxies), remote_addr)
return utils.anonymize(str(remote_addr))
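# Editor's note (illustrative example, not in the original source): with
# trusted-proxies = ["10.0.0.2"] and an X-Forwarded-For access route of
# ["203.0.113.7"], the route becomes ["203.0.113.7", "10.0.0.2"]; walking it from
# the right skips the trusted proxy and picks the real client address, which
# utils.anonymize() then truncates (e.g. roughly "203.0.113.0" for IPv4).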
def create_cookie(self, **kwargs):
"""
Setting cookies to SameSite=None requires "Secure" attribute.
For http-only, we need to override the dump_cookie() default SameSite=None
or the cookie will be rejected.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite#samesitenone_requires_secure
"""
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
samesite = self.isso.conf.get("server", "samesite")
if isso_host_script.startswith("https://"):
secure = True
samesite = samesite or "None"
else:
secure = False
samesite = samesite or "Lax"
return functools.partial(dump_cookie, **kwargs,
secure=secure, samesite=samesite)
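# Editor's note (illustrative): with an https public endpoint and no explicit
# [server] samesite setting, cookies built by this helper carry "Secure" and
# "SameSite=None"; on a plain http endpoint they fall back to "SameSite=Lax",
# since browsers reject SameSite=None cookies that lack the Secure attribute.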
"""
@api {get} /id/:id view
@apiGroup Comment
@apiName view
@apiVersion 0.12.6
@apiDescription
View an existing comment, for the purpose of editing. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
@apiParam {Number} id
The id of the comment to view.
@apiUse plainParam
@apiExample {curl} View the comment with id 4:
curl 'https://comments.example.com/id/4' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample Example result:
{
"website": null,
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 1
}
"""
def view(self, environ, request, id):
rv = self.comments.get(id)
if rv is None:
raise NotFound
try:
self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
if request.args.get('plain', '0') == '0':
rv['text'] = self.isso.render(rv['text'])
return JSON(rv, 200)
"""
@api {put} /id/:id edit
@apiGroup Comment
@apiName edit
@apiVersion 0.12.6
@apiDescription
Edit an existing comment. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details. Editing a comment will set a new edit cookie in the response.
@apiUse csrf
@apiParam {Number} id
The id of the comment to edit.
@apiBody {String{3...65535}} text
A new (raw) text for the comment.
@apiBody {String} [author]
The modified comment’s author’s name.
@apiBody {String{...254}} [website]
The modified comment’s author’s website. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiExample {curl} Edit comment with id 23:
curl -X PUT 'https://comments.example.com/id/23' -d {"text": "I see your point. However, I still disagree.", "website": "maxrant.important.com"} -H 'Content-Type: application/json' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Example response:
HTTP/1.1 200 OK
{
"website": "maxrant.important.com",
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>I see your point. However, I still disagree.</p>",
"dislikes": 0,
"modified": 1464943439.073961,
"mode": 1,
"id": 23,
"likes": 0
}
"""
@xhr
def edit(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
data = request.json
if data.get("text") is None or len(data["text"]) < 3:
raise BadRequest("no text given")
for key in set(data.keys()) - set(["text", "author", "website"]):
data.pop(key)
data['modified'] = time.time()
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
resp = JSON(rv, 200)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
"""
@api {delete} /id/:id delete
@apiGroup Comment
@apiName delete
@apiVersion 0.12.6
@apiDescription
Delete an existing comment. Deleting a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
Returns either `null` or a comment with an empty text value when the comment is still referenced by other comments.
@apiUse csrf
@apiParam {Number} id
Id of the comment to delete.
@apiExample {curl} Delete comment with id 14:
curl -X DELETE 'https://comments.example.com/id/14' -b cookie.txt
@apiSuccessExample Successful deletion returns null and deletes cookie:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
null
@apiSuccessExample {json} Comment still referenced by another:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
{
"id": 14,
"parent": null,
"created": 1653432621.0512516,
"modified": 1653434488.571937,
"mode": 4,
"text": "",
"author": null,
"website": null,
"likes": 0,
"dislikes": 0,
"notification": 0
}
"""
@xhr
def delete(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ""))
except (SignatureExpired, BadSignature):
raise Forbidden
else:
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
with self.isso.lock:
rv = self.comments.delete(id)
if rv:
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.delete", id)
resp = JSON(rv, 200)
cookie = self.create_cookie(expires=0, max_age=0)
resp.headers.add("Set-Cookie", cookie(str(id)))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % id))
return resp
"""
@api {get} /id/:id/unsubscribe/:email/:key unsubscribe
@apiGroup Comment
@apiName unsubscribe
@apiVersion 0.12.6
@apiDescription
Opt out from getting any further email notifications about replies to a particular comment. In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by isso.
@apiParam {Number} id
The id of the comment to unsubscribe from replies to.
@apiParam {String} email
The email address of the subscriber.
@apiParam {String} key
The key to authenticate the subscriber.
@apiExample {curl} Unsubscribe Alice from replies to comment with id 13:
curl -X GET 'https://comments.example.com/id/13/unsubscribe/[email protected]/WyJ1bnN1YnNjcmliZSIsImFsaWNlQGV4YW1wbGUuY29tIl0.DdcH9w.Wxou-l22ySLFkKUs7RUHnoM8Kos'
@apiSuccessExample {html} Using GET:
<!DOCTYPE html>
<html>
<head>Successfully unsubscribed</head>
<body>
<p>You have been unsubscribed from replies in the given conversation.</p>
</body>
</html>
"""
def unsubscribe(self, environ, request, id, email, key):
email = unquote(email)
try:
rv = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
if not isinstance(rv, list) or len(rv) != 2:
raise Forbidden
if rv[0] != 'unsubscribe' or rv[1] != email:
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
with self.isso.lock:
self.comments.unsubscribe(email, id)
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
" <title>Successfully unsubscribed</title>"
"</head>"
"<body>"
" <p>You have been unsubscribed from replies in the given conversation.</p>"
"</body>"
"</html>")
return Response(modal, 200, content_type="text/html")
"""
@api {post} /id/:id/:action/:key moderate
@apiGroup Comment
@apiName moderate
@apiVersion 0.12.6
@apiDescription
Publish or delete a comment that is in the moderation queue (mode `2`). In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by Isso or provided in the admin interface.
This endpoint can also be used with a `GET` request. In that case, a html page is returned that asks the user whether they are sure to perform the selected action. If they select “yes”, the query is repeated using `POST`.
@apiParam {Number} id
The id of the comment to moderate.
@apiParam {String=activate,edit,delete} action
- `activate` to publish the comment (change its mode to `1`).
- `edit`: Send `text`, `author`, `email` and `website` via `POST`.
To be used from the admin interface. Better use the `edit` `PUT` endpoint.
- `delete` to delete the comment.
@apiParam {String} key
The moderation key to authenticate the moderation.
@apiExample {curl} delete comment with id 13:
curl -X POST 'https://comments.example.com/id/13/delete/MTM.CjL6Fg.REIdVXa-whJS_x8ojQL4RrXnuF4'
@apiSuccessExample {html} Request deletion using GET:
<!DOCTYPE html>
<html>
<head>
<script>
if (confirm('Delete: Are you sure?')) {
xhr = new XMLHttpRequest;
xhr.open('POST', window.location.href);
xhr.send(null);
xhr.onload = function() {
window.location.href = "https://example.com/example-thread/#isso-13";
};
}
</script>
@apiSuccessExample Delete using POST:
Comment has been deleted
@apiSuccessExample Activate using POST:
Comment has been activated
"""
def moderate(self, environ, request, id, action, key):
try:
id = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
thread = self.threads.get(item['tid'])
link = local("origin") + thread["uri"] + "#isso-%i" % item["id"]
if request.method == "GET":
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
"<script>"
" if (confirm('%s: Are you sure?')) {"
" xhr = new XMLHttpRequest;"
" xhr.open('POST', window.location.href);"
" xhr.send(null);"
" xhr.onload = function() {"
" window.location.href = %s;"
" };"
" }"
"</script>" % (action.capitalize(), json.dumps(link)))
return Response(modal, 200, content_type="text/html")
if action == "activate":
if item['mode'] == 1:
return Response("Already activated", 200)
with self.isso.lock:
self.comments.activate(id)
self.signal("comments.activate", thread, item)
return Response("Comment has been activated", 200)
elif action == "edit":
data = request.json
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
return JSON(rv, 200)
else:
with self.isso.lock:
self.comments.delete(id)
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
self.signal("comments.delete", id)
return Response("Comment has been deleted", 200)
"""
@api {get} / Get comments
@apiGroup Thread
@apiName fetch
@apiVersion 0.12.6
@apiDescription Queries the publicly visible comments of a thread.
@apiQuery {String} uri
The URI of thread to get the comments from.
@apiQuery {Number} [parent]
Return only comments that are children of the comment with the provided ID.
@apiUse plainParam
@apiQuery {Number} [limit]
The maximum number of returned top-level comments. Omit for unlimited results.
@apiQuery {Number} [nested_limit]
The maximum number of returned nested comments per comment. Omit for unlimited results.
@apiQuery {Number} [after]
Includes only comments were added after the provided UNIX timestamp.
@apiSuccess {Number} id
Id of the comment `replies` is the list of replies of. `null` for the list of top-level comments.
@apiSuccess {Number} total_replies
The number of replies if the `limit` parameter was not set. If `after` is set to `X`, this is the number of comments that were created after `X`. So setting `after` may change this value!
@apiSuccess {Number} hidden_replies
The number of comments that were omitted from the results because of the `limit` request parameter. Usually, this will be `total_replies` - `limit`.
@apiSuccess {Object[]} replies
The list of comments. Each comment also has the `total_replies`, `replies`, `id` and `hidden_replies` properties to represent nested comments.
@apiSuccess {Object[]} config
Object holding only the client configuration parameters that depend on server settings. Will be dropped in a future version of Isso. Use the dedicated `/config` endpoint instead.
@apiExample {curl} Get 2 comments with 5 responses:
curl 'https://comments.example.com/?uri=/thread/&limit=2&nested_limit=5'
@apiSuccessExample {json} Example response:
{
"total_replies": 14,
"replies": [
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.732863,
"text": "<p>Hello, World!</p>",
"total_replies": 1,
"hidden_replies": 0,
"dislikes": 2,
"modified": null,
"mode": 1,
"replies": [
{
"website": null,
"author": null,
"parent": 1,
"created": 1464818460.769638,
"text": "<p>Hi, now some Markdown: <em>Italic</em>, <strong>bold</strong>, <code>monospace</code>.</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "2af4e1a6c96a",
"id": 2,
"likes": 2
}
],
"hash": "1cb6cc0309a2",
"id": 1,
"likes": 2
},
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.80574,
"text": "<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Accusantium at commodi cum deserunt dolore, error fugiat harum incidunt, ipsa ipsum mollitia nam provident rerum sapiente suscipit tempora vitae? Est, qui?</p>",
"total_replies": 0,
"hidden_replies": 0,
"dislikes": 0,
"modified": null,
"mode": 1,
"replies": [],
"hash": "1cb6cc0309a2",
"id": 3,
"likes": 0
},
"id": null,
"hidden_replies": 12
}
"""
@requires(str, 'uri')
def fetch(self, environ, request, uri):
args = {
'uri': uri,
'after': request.args.get('after', 0)
}
try:
args['limit'] = int(request.args.get('limit'))
except TypeError:
args['limit'] = None
except ValueError:
return BadRequest("limit should be integer")
if request.args.get('parent') is not None:
try:
args['parent'] = int(request.args.get('parent'))
root_id = args['parent']
except ValueError:
return BadRequest("parent should be integer")
else:
args['parent'] = None
root_id = None
plain = request.args.get('plain', '0') == '0'
reply_counts = self.comments.reply_count(uri, after=args['after'])
if args['limit'] == 0:
root_list = []
else:
root_list = list(self.comments.fetch(**args))
if root_id not in reply_counts:
reply_counts[root_id] = 0
try:
nested_limit = int(request.args.get('nested_limit'))
except TypeError:
nested_limit = None
except ValueError:
return BadRequest("nested_limit should be integer")
rv = {
'id': root_id,
'total_replies': reply_counts[root_id],
'hidden_replies': reply_counts[root_id] - len(root_list),
'replies': self._process_fetched_list(root_list, plain),
'config': self.public_conf
}
# We are only checking for one level deep comments
if root_id is None:
for comment in rv['replies']:
if comment['id'] in reply_counts:
comment['total_replies'] = reply_counts[comment['id']]
if nested_limit is not None:
if nested_limit > 0:
args['parent'] = comment['id']
args['limit'] = nested_limit
replies = list(self.comments.fetch(**args))
else:
replies = []
else:
args['parent'] = comment['id']
replies = list(self.comments.fetch(**args))
else:
comment['total_replies'] = 0
replies = []
comment['hidden_replies'] = comment['total_replies'] - \
len(replies)
comment['replies'] = self._process_fetched_list(replies, plain)
return JSON(rv, 200)
def _add_gravatar_image(self, item):
if not self.conf.getboolean('gravatar'):
return item
email = item['email'] or item['author'] or ''
email_md5_hash = md5(email)
gravatar_url = self.conf.get('gravatar-url')
item['gravatar_image'] = gravatar_url.format(email_md5_hash)
return item
def _process_fetched_list(self, fetched_list, plain=False):
for item in fetched_list:
key = item['email'] or item['remote_addr']
val = self.cache.get('hash', key.encode('utf-8'))
if val is None:
val = self.hash(key)
self.cache.set('hash', key.encode('utf-8'), val)
item['hash'] = val
item = self._add_gravatar_image(item)
for key in set(item.keys()) - API.FIELDS:
item.pop(key)
if plain:
for item in fetched_list:
item['text'] = self.isso.render(item['text'])
return fetched_list
"""
@apiDefine likeResponse
@apiSuccess {Number} likes
The (new) number of likes on the comment.
@apiSuccess {Number} dislikes
The (new) number of dislikes on the comment.
@apiSuccessExample Return updated vote counts:
{
"likes": 4,
"dislikes": 3
}
"""
"""
@api {post} /id/:id/like like
@apiGroup Comment
@apiName like
@apiVersion 0.12.6
@apiDescription
Puts a “like” on a comment. The author of a comment cannot like their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to like.
@apiExample {curl} Like comment with id 23:
curl -X POST 'https://comments.example.com/id/23/like'
@apiUse likeResponse
"""
@xhr
def like(self, environ, request, id):
nv = self.comments.vote(True, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /id/:id/dislike dislike
@apiGroup Comment
@apiName dislike
@apiVersion 0.12.6
@apiDescription
Puts a “dislike” on a comment. The author of a comment cannot dislike their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to dislike.
@apiExample {curl} Dislike comment with id 23:
curl -X POST 'https://comments.example.com/id/23/dislike'
@apiUse likeResponse
"""
@xhr
def dislike(self, environ, request, id):
nv = self.comments.vote(False, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /preview preview
@apiGroup Comment
@apiName preview
@apiVersion 0.12.6
@apiDescription
Render comment text using markdown.
@apiBody {String{3...65535}} text
(Raw) comment text
@apiSuccess {String} text
Rendered comment text
@apiExample {curl} Preview comment:
curl -X POST 'https://comments.example.com/preview' -d '{"text": "A sample comment"}'
@apiSuccessExample {json} Rendered comment:
{
"text": "<p>A sample comment</p>"
}
"""
def preview(self, environment, request):
data = request.json
if data.get("text", None) is None:
raise BadRequest("no text given")
return JSON({'text': self.isso.render(data["text"])}, 200)
"""
@api {post} /count Count comments
@apiGroup Thread
@apiName counts
@apiVersion 0.12.6
@apiDescription
Counts the number of comments on multiple threads. The requestor provides a list of thread URIs. The number of comments on each thread is returned as a list, in the same order as the threads were requested. The counts include comments that are responses to comments, but only published comments (i.e. excluding comments pending moderation).
@apiBody {Number[]} urls
Array of URLs for which to fetch comment counts
@apiExample {curl} Get the respective counts of 5 threads:
curl -X POST 'https://comments.example.com/count' -d '["/blog/firstPost.html", "/blog/controversalPost.html", "/blog/howToCode.html", "/blog/boringPost.html", "/blog/isso.html"]'
@apiSuccessExample {json} Counts of 5 threads:
[2, 18, 4, 0, 3]
"""
def counts(self, environ, request):
data = request.json
if not isinstance(data, list) and not all(isinstance(x, str) for x in data):
raise BadRequest("JSON must be a list of URLs")
return JSON(self.comments.count(*data), 200)
"""
@api {get} /feed Atom feed for comments
@apiGroup Thread
@apiName feed
@apiVersion 0.12.6
@apiDescription
Provide an Atom feed for the given thread. Only available if `[rss] base` is set in server config. By default, up to 100 comments are returned.
@apiQuery {String} uri
The uri of the thread to display a feed for
@apiExample {curl} Get an Atom feed for /thread/foo in XML format:
curl 'https://comments.example.com/feed?uri=/thread/foo'
@apiSuccessExample Atom feed for /thread/foo:
<?xml version='1.0' encoding='utf-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:thr="http://purl.org/syndication/thread/1.0">
<updated>2022-05-24T20:38:04.032789Z</updated>
<id>tag:example.com,2018:/isso/thread/thread/foo</id>
<title>Comments for example.com/thread/foo</title>
<entry>
<id>tag:example.com,2018:/isso/1/2</id>
<title>Comment #2</title>
<updated>2022-05-24T20:38:04.032789Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-2" />
<content type="html"><p>And another</p></content>
</entry>
<entry>
<id>tag:example.com,2018:/isso/1/1</id>
<title>Comment #1</title>
<updated>2022-05-24T20:38:00.837703Z</updated>
<author>
<name>Jane Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-1" />
<content type="html"><p>A sample comment</p></content>
</entry>
</feed>
"""
@requires(str, 'uri')
def feed(self, environ, request, uri):
conf = self.isso.conf.section("rss")
if not conf.get('base'):
raise NotFound
args = {
'uri': uri,
'order_by': 'id',
'asc': 0,
'limit': conf.getint('limit')
}
try:
args['limit'] = max(int(request.args.get('limit')), args['limit'])
except TypeError:
pass
except ValueError:
return BadRequest("limit should be integer")
comments = self.comments.fetch(**args)
base = conf.get('base').rstrip('/')
hostname = urlparse(base).netloc
# Let's build an Atom feed.
# RFC 4287: https://tools.ietf.org/html/rfc4287
# RFC 4685: https://tools.ietf.org/html/rfc4685 (threading extensions)
# For IDs: http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id
feed = ET.Element('feed', {
'xmlns': 'http://www.w3.org/2005/Atom',
'xmlns:thr': 'http://purl.org/syndication/thread/1.0'
})
# For feed ID, we would use thread ID, but we may not have
# one. Therefore, we use the URI. We don't have a year
# either...
id = ET.SubElement(feed, 'id')
id.text = 'tag:{hostname},2018:/isso/thread{uri}'.format(
hostname=hostname, uri=uri)
# For title, we don't have much either. Be pretty generic.
title = ET.SubElement(feed, 'title')
title.text = 'Comments for {hostname}{uri}'.format(
hostname=hostname, uri=uri)
comment0 = None
for comment in comments:
if comment0 is None:
comment0 = comment
entry = ET.SubElement(feed, 'entry')
# We don't use a real date in ID either to help with
# threading.
id = ET.SubElement(entry, 'id')
id.text = 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['id'])
title = ET.SubElement(entry, 'title')
title.text = 'Comment #{}'.format(comment['id'])
updated = ET.SubElement(entry, 'updated')
updated.text = '{}Z'.format(datetime.fromtimestamp(
comment['modified'] or comment['created']).isoformat())
author = ET.SubElement(entry, 'author')
name = ET.SubElement(author, 'name')
name.text = comment['author']
ET.SubElement(entry, 'link', {
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['id'])
})
content = ET.SubElement(entry, 'content', {
'type': 'html',
})
content.text = self.isso.render(comment['text'])
if comment['parent']:
ET.SubElement(entry, 'thr:in-reply-to', {
'ref': 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['parent']),
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['parent'])
})
# Updated is mandatory. If we have comments, we use the date
# of last modification of the first one (which is the last
# one). Otherwise, we use a fixed date.
updated = ET.Element('updated')
if comment0 is None:
updated.text = '1970-01-01T01:00:00Z'
else:
updated.text = datetime.fromtimestamp(
comment0['modified'] or comment0['created']).isoformat()
updated.text += 'Z'
feed.insert(0, updated)
output = StringIO()
ET.ElementTree(feed).write(output,
encoding='utf-8',
xml_declaration=True)
response = XML(output.getvalue(), 200)
# Add an etag/last-modified value for caching purpose
if comment0 is None:
response.set_etag('empty')
response.last_modified = 0
else:
response.set_etag('{tid}-{id}'.format(**comment0))
response.last_modified = comment0['modified'] or comment0['created']
return response.make_conditional(request)
"""
@api {get} /config Fetch client config
@apiGroup Thread
@apiName config
@apiVersion 0.13.0
@apiDescription
Returns only the client configuration parameters that depend on server settings.
@apiSuccess {Object[]} config
The client configuration object.
@apiSuccess {Boolean} config.reply-to-self
Commenters can reply to their own comments.
@apiSuccess {Boolean} config.require-author
Commenters must enter a valid name.
@apiSuccess {Boolean} config.require-email
Commenters must enter a valid email address.
@apiSuccess {Boolean} config.reply-notifications
Enable reply notifications via E-mail.
@apiSuccess {Boolean} config.gravatar
Load images from Gravatar service instead of generating them. Also disables regular avatars (see below).
@apiSuccess {Boolean} config.avatar
To avoid having both regular avatars and Gravatars side-by-side,
setting `gravatar` will disable regular avatars. The `avatar` key will
only be sent by the server if `gravatar` is set.
@apiSuccess {Boolean} config.feed
Enable or disable the addition of a link to the feed for the comment
thread.
@apiExample {curl} get the client config:
curl 'https://comments.example.com/config'
@apiSuccessExample {json} Client config:
{
"config": {
"reply-to-self": false,
"require-email": false,
"require-author": false,
"reply-notifications": false,
"gravatar": true,
"avatar": false,
"feed": false
}
}
"""
def config(self, environment, request):
rv = {'config': self.public_conf}
return JSON(rv, 200)
"""
@api {get} /demo/ Isso demo page
@apiGroup Demo
@apiName demo
@apiVersion 0.13.0
@apiPrivate
@apiDescription
Displays a demonstration of Isso with a thread counter and comment widget.
@apiExample {curl} Get demo page
curl 'https://comments.example.com/demo/'
@apiSuccessExample {html} Demo page:
<!DOCTYPE html>
<head>
<title>Isso Demo</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div id="page">
<div id="wrapper" style="max-width: 900px; margin-left: auto; margin-right: auto;">
<h2><a href=".">Isso Demo</a></h2>
<script src="../js/embed.dev.js" data-isso="../" ></script>
<section>
<p>This is a link to a thread, which will display a comment counter:
<a href=".#isso-thread">How many Comments?</a></p>
<p>Below is the actual comment field.</p>
</section>
<section id="isso-thread" data-title="Isso Test"><noscript>Javascript needs to be activated to view comments.</noscript></section>
</div>
</div>
</body>
"""
def demo(self, env, req):
index = pkg_resources.resource_filename('isso', 'demo/index.html')
return send_from_directory(os_path.dirname(index), 'index.html', env)
"""
@api {post} /login/ Log in
@apiGroup Admin
@apiName login
@apiVersion 0.12.6
@apiPrivate
@apiDescription
Log in to the admin interface; redirects to `/admin/` on success. Must send form data, not JSON, in the `POST` body.
@apiBody {String} password
The admin password as set in `[admin] password` in the server config.
@apiExample {curl} Log in
curl -X POST 'https://comments.example.com/login' -F "password=strong_default_password_for_isso_admin" -c cookie.txt
@apiSuccessExample {html} Login successful:
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="https://comments.example.com/admin/">https://comments.example.com/admin/</a>. If not, click the link.
"""
def login(self, env, req):
if not self.isso.conf.getboolean("admin", "enabled"):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('disabled.html', isso_host_script=isso_host_script)
data = req.form
password = self.isso.conf.get("admin", "password")
if data['password'] and data['password'] == password:
response = redirect(re.sub(
r'/login/$',
'/admin/',
get_current_url(env, strip_querystring=True)
))
cookie = self.create_cookie(value=self.isso.sign({"logged": True}),
expires=datetime.now() + timedelta(1))
response.headers.add("Set-Cookie", cookie("admin-session"))
response.headers.add("X-Set-Cookie", cookie("isso-admin-session"))
return response
else:
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('login.html', isso_host_script=isso_host_script)
"""
@api {get} /admin/ Admin interface
@apiGroup Admin
@apiName admin
@apiVersion 0.12.6
@apiPrivate
@apiPermission admin
@apiDescription
Display an admin interface from which to manage comments. Will redirect to `/login` if not already logged in.
@apiQuery {Number} [page=0]
Page number
@apiQuery {Number{1,2,4}} [mode=2]
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiQuery {String{id,created,modified,likes,dislikes,tid}} [order_by=created]
Comment ordering
@apiQuery {Number{0,1}} [asc=0]
Ascending
@apiExample {curl} Listing of published comments:
curl 'https://comments.example.com/admin/?mode=1&page=0&order_by=modified&asc=1' -b cookie.txt
"""
def admin(self, env, req):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
if not self.isso.conf.getboolean("admin", "enabled"):
return render_template('disabled.html', isso_host_script=isso_host_script)
try:
data = self.isso.unsign(req.cookies.get('admin-session', ''),
max_age=60 * 60 * 24)
except BadSignature:
return render_template('login.html', isso_host_script=isso_host_script)
if not data or not data['logged']:
return render_template('login.html', isso_host_script=isso_host_script)
page_size = 100
page = int(req.args.get('page', 0))
order_by = req.args.get('order_by', 'created')
asc = int(req.args.get('asc', 0))
mode = int(req.args.get('mode', 2))
comments = self.comments.fetchall(mode=mode, page=page,
limit=page_size,
order_by=order_by,
asc=asc)
comments_enriched = []
for comment in list(comments):
comment['hash'] = self.isso.sign(comment['id'])
comments_enriched.append(comment)
comment_mode_count = self.comments.count_modes()
max_page = int(sum(comment_mode_count.values()) / 100)
return render_template('admin.html', comments=comments_enriched,
page=int(page), mode=int(mode),
conf=self.conf, max_page=max_page,
counts=comment_mode_count,
order_by=order_by, asc=asc,
isso_host_script=isso_host_script)
"""
@api {get} /latest latest
@apiGroup Comment
@apiName latest
@apiVersion 0.12.6
@apiDescription
Get the latest comments from the system, no matter which thread. Only available if `[general] latest-enabled` is set to `true` in server config.
@apiQuery {Number} limit
The quantity of last comments to retrieve
@apiExample {curl} Get the latest 5 comments
curl 'https://comments.example.com/latest?limit=5'
@apiUse commentResponse
@apiSuccessExample Example result:
[
{
"website": null,
"uri": "/some",
"author": null,
"parent": null,
"created": 1464912312.123416,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 3,
"likes": 1
},
{
"website": null,
"uri": "/other",
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 0
}
]
"""
def latest(self, environ, request):
# if the feature is not allowed, don't present the endpoint
if not self.conf.getboolean("latest-enabled"):
return NotFound(
"Unavailable because 'latest-enabled' not set by site admin"
)
# get and check the limit
bad_limit_msg = "Query parameter 'limit' is mandatory (integer, >0)"
try:
limit = int(request.args['limit'])
except (KeyError, ValueError):
return BadRequest(bad_limit_msg)
if limit <= 0:
return BadRequest(bad_limit_msg)
# retrieve the latest N comments from the DB
all_comments_gen = self.comments.fetchall(limit=None, order_by='created', mode='1')
comments = collections.deque(all_comments_gen, maxlen=limit)
# prepare a special set of fields (except text which is rendered specifically)
fields = {
'author',
'created',
'dislikes',
'id',
'likes',
'mode',
'modified',
'parent',
'text',
'uri',
'website',
}
# process the retrieved comments and build results
result = []
for comment in comments:
processed = {key: comment[key] for key in fields}
processed['text'] = self.isso.render(comment['text'])
result.append(processed)
return JSON(result, 200)
| # -*- encoding: utf-8 -*-
import collections
import re
import time
import functools
import json # json.dumps to put URL in <script>
import pkg_resources
from configparser import NoOptionError
from datetime import datetime, timedelta
from html import escape
from io import BytesIO as StringIO
from os import path as os_path
from urllib.parse import unquote, urlparse
from xml.etree import ElementTree as ET
from itsdangerous import SignatureExpired, BadSignature
from werkzeug.exceptions import BadRequest, Forbidden, NotFound
from werkzeug.http import dump_cookie
from werkzeug.routing import Rule
from werkzeug.utils import redirect, send_from_directory
from werkzeug.wrappers import Response
from werkzeug.wsgi import get_current_url
from isso import utils, local
from isso.utils import (http, parse,
JSONResponse as JSON, XMLResponse as XML,
render_template)
from isso.utils.hash import md5, sha1
from isso.views import requires
# from Django apparently, looks good to me *duck*
__url_re = re.compile(
r'^'
r'(https?://)?'
# domain...
r'(?:(?:[\w](?:[\w-]{0,61}[\w])?\.)+(?:[\w]{2,6}\.?|[\w-]{2,}\.?)|'
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)'
r'$', re.IGNORECASE | re.UNICODE)
def isurl(text):
return __url_re.match(text) is not None
def normalize(url):
if not url.startswith(("http://", "https://")):
return "http://" + url
return url
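# Illustrative usage of the two helpers above (examples are assumptions, not
# exhaustive tests): because the domain labels are matched with \w under
# re.UNICODE, internationalised hostnames such as umlaut domains should pass.
#
#     isurl("example.com/foo")       # -> True
#     isurl("https://müller.de")     # -> True  (assuming \w matches the umlaut)
#     isurl("not a url")             # -> False
#     normalize("example.com")       # -> "http://example.com"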
def xhr(func):
"""A decorator to check for CSRF on POST/PUT/DELETE using a <form>
element and JS to execute automatically (see #40 for a proof-of-concept).
When an attacker uses a <form> to downvote a comment, the browser *should*
add a `Content-Type: ...` header with three possible values:
* application/x-www-form-urlencoded
* multipart/form-data
* text/plain
If the header is not sent or requests `application/json`, the request is
not forged (XHR is restricted by CORS separately).
"""
"""
@apiDefine csrf
@apiHeader {String="application/json"} Content-Type
The content type must be set to `application/json` to prevent CSRF attacks.
"""
def dec(self, env, req, *args, **kwargs):
if req.content_type and not req.content_type.startswith("application/json"):
raise Forbidden("CSRF")
return func(self, env, req, *args, **kwargs)
return dec
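# Illustrative behaviour of the decorator above: a cross-site <form> POST
# arrives with e.g. Content-Type: application/x-www-form-urlencoded and is
# rejected with Forbidden("CSRF"), while a legitimate XHR that sends
# Content-Type: application/json (or no Content-Type at all) reaches the
# wrapped view.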
class API(object):
FIELDS = set(['id', 'parent', 'text', 'author', 'website',
'mode', 'created', 'modified', 'likes', 'dislikes', 'hash', 'gravatar_image', 'notification'])
# comment fields, that can be submitted
ACCEPT = set(['text', 'author', 'website', 'email', 'parent', 'title', 'notification'])
VIEWS = [
('fetch', ('GET', '/')),
('new', ('POST', '/new')),
('counts', ('POST', '/count')),
('feed', ('GET', '/feed')),
('latest', ('GET', '/latest')),
('view', ('GET', '/id/<int:id>')),
('edit', ('PUT', '/id/<int:id>')),
('delete', ('DELETE', '/id/<int:id>')),
('unsubscribe', ('GET', '/id/<int:id>/unsubscribe/<string:email>/<string:key>')),
('moderate', ('GET', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('moderate', ('POST', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('like', ('POST', '/id/<int:id>/like')),
('dislike', ('POST', '/id/<int:id>/dislike')),
('demo', ('GET', '/demo/')),
('preview', ('POST', '/preview')),
('config', ('GET', '/config')),
('login', ('POST', '/login/')),
('admin', ('GET', '/admin/'))
]
def __init__(self, isso, hasher):
self.isso = isso
self.hash = hasher.uhash
self.cache = isso.cache
self.signal = isso.signal
self.conf = isso.conf.section("general")
self.moderated = isso.conf.getboolean("moderation", "enabled")
# this is similar to the wordpress setting "Comment author must have a previously approved comment"
try:
self.approve_if_email_previously_approved = isso.conf.getboolean("moderation", "approve-if-email-previously-approved")
except NoOptionError:
self.approve_if_email_previously_approved = False
try:
self.trusted_proxies = list(isso.conf.getiter("server", "trusted-proxies"))
except NoOptionError:
self.trusted_proxies = []
# These configuration records can be read out by client
self.public_conf = {}
self.public_conf["reply-to-self"] = isso.conf.getboolean("guard", "reply-to-self")
self.public_conf["require-email"] = isso.conf.getboolean("guard", "require-email")
self.public_conf["require-author"] = isso.conf.getboolean("guard", "require-author")
self.public_conf["reply-notifications"] = isso.conf.getboolean("general", "reply-notifications")
self.public_conf["gravatar"] = isso.conf.getboolean("general", "gravatar")
if self.public_conf["gravatar"]:
self.public_conf["avatar"] = False
self.public_conf["feed"] = False
rss = isso.conf.section("rss")
if rss and rss.get('base'):
self.public_conf["feed"] = True
self.guard = isso.db.guard
self.threads = isso.db.threads
self.comments = isso.db.comments
for (view, (method, path)) in self.VIEWS:
isso.urls.add(
Rule(path, methods=[method], endpoint=getattr(self, view)))
@classmethod
def verify(cls, comment):
if comment.get("text") is None:
return False, "text is missing"
if not isinstance(comment.get("parent"), (int, type(None))):
return False, "parent must be an integer or null"
for key in ("text", "author", "website", "email"):
if not isinstance(comment.get(key), (str, type(None))):
return False, "%s must be a string or null" % key
if len(comment["text"].rstrip()) < 3:
return False, "text is too short (minimum length: 3)"
if len(comment["text"]) > 65535:
return False, "text is too long (maximum length: 65535)"
if len(comment.get("email") or "") > 254:
return False, "http://tools.ietf.org/html/rfc5321#section-4.5.3"
if comment.get("website"):
if len(comment["website"]) > 254:
return False, "arbitrary length limit"
if not isurl(comment["website"]):
return False, "Website not Django-conform"
return True, ""
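# Illustrative verify() results (minimal sketch, values chosen for clarity):
#
#     API.verify({"text": "A sample comment"})  # -> (True, "")
#     API.verify({"text": "ok"})                # -> (False, "text is too short (minimum length: 3)")
#     API.verify({"text": "hi there", "parent": "1"})
#                                               # -> (False, "parent must be an integer or null")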
# Common definitions for apidoc follow:
"""
@apiDefine plainParam
@apiQuery {Number=0,1} [plain=0]
If set to `1`, the plain text entered by the user will be returned in the comments’ `text` attribute (instead of the rendered markdown).
"""
"""
@apiDefine commentResponse
@apiSuccess {Number} id
The comment’s id (assigned by the server).
@apiSuccess {Number} parent
Id of the comment this comment is a reply to. `null` if this is a top-level-comment.
@apiSuccess {Number=1,2,4} mode
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiSuccess {String} author
The comment’s author’s name or `null`.
@apiSuccess {String} website
The comment’s author’s website or `null`.
@apiSuccess {String} hash
A hash uniquely identifying the comment’s author.
@apiSuccess {Number} created
UNIX timestamp of the time the comment was created (on the server).
@apiSuccess {Number} modified
UNIX timestamp of the time the comment was last modified (on the server). `null` if the comment was not yet modified.
"""
"""
@apiDefine admin Admin access needed
Only available to a logged-in site admin. Requires a valid `admin-session` cookie.
"""
"""
@api {post} /new create new
@apiGroup Comment
@apiName new
@apiVersion 0.12.6
@apiDescription
Creates a new comment. The server issues a cookie per new comment which acts as
an authentication token to modify or delete the comment.
The token is cryptographically signed and expires automatically after 900 seconds (=15min) by default.
@apiUse csrf
@apiQuery {String} uri
The uri of the thread to create the comment on.
@apiBody {String{3...65535}} text
The comment’s raw text.
@apiBody {String} [author]
The comment’s author’s name.
@apiBody {String{...254}} [email]
The comment’s author’s email address.
@apiBody {String{...254}} [website]
The comment’s author’s website’s url. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiBody {Number} [parent]
The parent comment’s id if the new comment is a response to an existing comment.
@apiExample {curl} Create a reply to comment with id 15:
curl 'https://comments.example.com/new?uri=/thread/' -d '{"text": "Stop saying that! *isso*!", "author": "Max Rant", "email": "[email protected]", "parent": 15}' -H 'Content-Type: application/json' -c cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Success after the above request:
HTTP/1.1 201 CREATED
Set-Cookie: 1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
X-Set-Cookie: isso-1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
{
"website": null,
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>Stop saying that! <em>isso</em>!</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "e644f6ee43c0",
"id": 23,
"likes": 0
}
"""
@xhr
@requires(str, 'uri')
def new(self, environ, request, uri):
data = request.json
for field in set(data.keys()) - API.ACCEPT:
data.pop(field)
for key in ("author", "email", "website", "parent"):
data.setdefault(key, None)
valid, reason = API.verify(data)
if not valid:
return BadRequest(reason)
for field in ("author", "email", "website"):
if data.get(field) is not None:
data[field] = escape(data[field], quote=False)
if data.get("website"):
data["website"] = normalize(data["website"])
data['mode'] = 2 if self.moderated else 1
data['remote_addr'] = self._remote_addr(request)
with self.isso.lock:
if uri not in self.threads:
if not data.get('title'):
with http.curl('GET', local("origin"), uri) as resp:
if resp and resp.status == 200:
uri, title = parse.thread(resp.read(), id=uri)
else:
return NotFound('URI does not exist %s')
else:
title = data['title']
thread = self.threads.new(uri, title)
self.signal("comments.new:new-thread", thread)
else:
thread = self.threads[uri]
# notify extensions that the new comment is about to save
self.signal("comments.new:before-save", thread, data)
valid, reason = self.guard.validate(uri, data)
if not valid:
self.signal("comments.new:guard", reason)
raise Forbidden(reason)
with self.isso.lock:
# if email-based auto-moderation enabled, check for previously approved author
# right before approval.
if self.approve_if_email_previously_approved and self.comments.is_previously_approved_author(data['email']):
data['mode'] = 1
rv = self.comments.add(uri, data)
# notify extension, that the new comment has been successfully saved
self.signal("comments.new:after-save", thread, rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
rv["hash"] = self.hash(rv['email'] or rv['remote_addr'])
self.cache.set(
'hash', (rv['email'] or rv['remote_addr']).encode('utf-8'), rv['hash'])
rv = self._add_gravatar_image(rv)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
# success!
self.signal("comments.new:finish", thread, rv)
resp = JSON(rv, 202 if rv["mode"] == 2 else 201)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
def _remote_addr(self, request):
"""Return the anonymized IP address of the requester.
Takes into consideration a potential X-Forwarded-For HTTP header
if a necessary server.trusted-proxies configuration entry is set.
Recipe source: https://stackoverflow.com/a/22936947/636849
"""
remote_addr = request.remote_addr
if self.trusted_proxies:
route = request.access_route + [remote_addr]
remote_addr = next((addr for addr in reversed(route)
if addr not in self.trusted_proxies), remote_addr)
return utils.anonymize(str(remote_addr))
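# Illustrative example (addresses are assumptions): with
# trusted-proxies = ["10.0.0.1"] and an incoming header
# X-Forwarded-For: 203.0.113.7, 10.0.0.1
# the first hop that is not a trusted proxy (203.0.113.7) is selected and
# anonymized before it is stored or hashed.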
def create_cookie(self, **kwargs):
"""
Setting cookies to SameSite=None requires "Secure" attribute.
For http-only, we need to override the dump_cookie() default SameSite=None
or the cookie will be rejected.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite#samesitenone_requires_secure
"""
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
samesite = self.isso.conf.get("server", "samesite")
if isso_host_script.startswith("https://"):
secure = True
samesite = samesite or "None"
else:
secure = False
samesite = samesite or "Lax"
return functools.partial(dump_cookie, **kwargs,
secure=secure, samesite=samesite)
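# Illustrative outcome (cookie name and token are placeholders): behind an
# https public endpoint the partial built above emits roughly
# "Set-Cookie: isso-23=<token>; Secure; Path=/; SameSite=None", while a
# plain-http deployment falls back to SameSite=Lax without the Secure flag.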
"""
@api {get} /id/:id view
@apiGroup Comment
@apiName view
@apiVersion 0.12.6
@apiDescription
View an existing comment, for the purpose of editing. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
@apiParam {Number} id
The id of the comment to view.
@apiUse plainParam
@apiExample {curl} View the comment with id 4:
curl 'https://comments.example.com/id/4' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample Example result:
{
"website": null,
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 1
}
"""
def view(self, environ, request, id):
rv = self.comments.get(id)
if rv is None:
raise NotFound
try:
self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
if request.args.get('plain', '0') == '0':
rv['text'] = self.isso.render(rv['text'])
return JSON(rv, 200)
"""
@api {put} /id/:id edit
@apiGroup Comment
@apiName edit
@apiVersion 0.12.6
@apiDescription
Edit an existing comment. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details. Editing a comment will set a new edit cookie in the response.
@apiUse csrf
@apiParam {Number} id
The id of the comment to edit.
@apiBody {String{3...65535}} text
A new (raw) text for the comment.
@apiBody {String} [author]
The modified comment’s author’s name.
@apiBody {String{...254}} [website]
The modified comment’s author’s website. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiExample {curl} Edit comment with id 23:
curl -X PUT 'https://comments.example.com/id/23' -d '{"text": "I see your point. However, I still disagree.", "website": "maxrant.important.com"}' -H 'Content-Type: application/json' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Example response:
HTTP/1.1 200 OK
{
"website": "maxrant.important.com",
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>I see your point. However, I still disagree.</p>",
"dislikes": 0,
"modified": 1464943439.073961,
"mode": 1,
"id": 23,
"likes": 0
}
"""
@xhr
def edit(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
data = request.json
if data.get("text") is None or len(data["text"]) < 3:
raise BadRequest("no text given")
for key in set(data.keys()) - set(["text", "author", "website"]):
data.pop(key)
data['modified'] = time.time()
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
resp = JSON(rv, 200)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
"""
@api {delete} /id/:id delete
@apiGroup Comment
@apiName delete
@apiVersion 0.12.6
@apiDescription
Delete an existing comment. Deleting a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
Returns either `null` or a comment with an empty text value when the comment is still referenced by other comments.
@apiUse csrf
@apiParam {Number} id
Id of the comment to delete.
@apiExample {curl} Delete comment with id 14:
curl -X DELETE 'https://comments.example.com/id/14' -b cookie.txt
@apiSuccessExample Successful deletion returns null and deletes cookie:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
null
@apiSuccessExample {json} Comment still referenced by another:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
{
"id": 14,
"parent": null,
"created": 1653432621.0512516,
"modified": 1653434488.571937,
"mode": 4,
"text": "",
"author": null,
"website": null,
"likes": 0,
"dislikes": 0,
"notification": 0
}
"""
@xhr
def delete(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ""))
except (SignatureExpired, BadSignature):
raise Forbidden
else:
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
with self.isso.lock:
rv = self.comments.delete(id)
if rv:
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.delete", id)
resp = JSON(rv, 200)
cookie = self.create_cookie(expires=0, max_age=0)
resp.headers.add("Set-Cookie", cookie(str(id)))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % id))
return resp
"""
@api {get} /id/:id/unsubscribe/:email/:key unsubscribe
@apiGroup Comment
@apiName unsubscribe
@apiVersion 0.12.6
@apiDescription
Opt out from getting any further email notifications about replies to a particular comment. In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by isso.
@apiParam {Number} id
The id of the comment to unsubscribe from replies to.
@apiParam {String} email
The email address of the subscriber.
@apiParam {String} key
The key to authenticate the subscriber.
@apiExample {curl} Unsubscribe Alice from replies to comment with id 13:
curl -X GET 'https://comments.example.com/id/13/unsubscribe/[email protected]/WyJ1bnN1YnNjcmliZSIsImFsaWNlQGV4YW1wbGUuY29tIl0.DdcH9w.Wxou-l22ySLFkKUs7RUHnoM8Kos'
@apiSuccessExample {html} Using GET:
<!DOCTYPE html>
<html>
<head>Successfully unsubscribed</head>
<body>
<p>You have been unsubscribed from replies in the given conversation.</p>
</body>
</html>
"""
def unsubscribe(self, environ, request, id, email, key):
email = unquote(email)
try:
rv = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
if not isinstance(rv, list) or len(rv) != 2:
raise Forbidden
if rv[0] != 'unsubscribe' or rv[1] != email:
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
with self.isso.lock:
self.comments.unsubscribe(email, id)
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
" <title>Successfully unsubscribed</title>"
"</head>"
"<body>"
" <p>You have been unsubscribed from replies in the given conversation.</p>"
"</body>"
"</html>")
return Response(modal, 200, content_type="text/html")
"""
@api {post} /id/:id/:action/:key moderate
@apiGroup Comment
@apiName moderate
@apiVersion 0.12.6
@apiDescription
Publish or delete a comment that is in the moderation queue (mode `2`). In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by Isso or provided in the admin interface.
This endpoint can also be used with a `GET` request. In that case, an HTML page is returned that asks the user whether they are sure they want to perform the selected action. If they select “yes”, the query is repeated using `POST`.
@apiParam {Number} id
The id of the comment to moderate.
@apiParam {String=activate,edit,delete} action
- `activate` to publish the comment (change its mode to `1`).
- `edit`: Send `text`, `author`, `email` and `website` via `POST`.
To be used from the admin interface. Better use the `edit` `PUT` endpoint.
- `delete` to delete the comment.
@apiParam {String} key
The moderation key to authenticate the moderation.
@apiExample {curl} delete comment with id 13:
curl -X POST 'https://comments.example.com/id/13/delete/MTM.CjL6Fg.REIdVXa-whJS_x8ojQL4RrXnuF4'
@apiSuccessExample {html} Request deletion using GET:
<!DOCTYPE html>
<html>
<head>
<script>
if (confirm('Delete: Are you sure?')) {
xhr = new XMLHttpRequest;
xhr.open('POST', window.location.href);
xhr.send(null);
xhr.onload = function() {
window.location.href = "https://example.com/example-thread/#isso-13";
};
}
</script>
@apiSuccessExample Delete using POST:
Comment has been deleted
@apiSuccessExample Activate using POST:
Comment has been activated
"""
def moderate(self, environ, request, id, action, key):
try:
id = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
thread = self.threads.get(item['tid'])
link = local("origin") + thread["uri"] + "#isso-%i" % item["id"]
if request.method == "GET":
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
"<script>"
" if (confirm('%s: Are you sure?')) {"
" xhr = new XMLHttpRequest;"
" xhr.open('POST', window.location.href);"
" xhr.send(null);"
" xhr.onload = function() {"
" window.location.href = %s;"
" };"
" }"
"</script>" % (action.capitalize(), json.dumps(link)))
return Response(modal, 200, content_type="text/html")
if action == "activate":
if item['mode'] == 1:
return Response("Already activated", 200)
with self.isso.lock:
self.comments.activate(id)
self.signal("comments.activate", thread, item)
return Response("Comment has been activated", 200)
elif action == "edit":
data = request.json
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
return JSON(rv, 200)
else:
with self.isso.lock:
self.comments.delete(id)
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
self.signal("comments.delete", id)
return Response("Comment has been deleted", 200)
"""
@api {get} / Get comments
@apiGroup Thread
@apiName fetch
@apiVersion 0.12.6
@apiDescription Queries the publicly visible comments of a thread.
@apiQuery {String} uri
The URI of thread to get the comments from.
@apiQuery {Number} [parent]
Return only comments that are children of the comment with the provided ID.
@apiUse plainParam
@apiQuery {Number} [limit]
The maximum number of returned top-level comments. Omit for unlimited results.
@apiQuery {Number} [nested_limit]
The maximum number of returned nested comments per comment. Omit for unlimited results.
@apiQuery {Number} [after]
Includes only comments that were added after the provided UNIX timestamp.
@apiSuccess {Number} id
Id of the comment `replies` is the list of replies of. `null` for the list of top-level comments.
@apiSuccess {Number} total_replies
The number of replies if the `limit` parameter was not set. If `after` is set to `X`, this is the number of comments that were created after `X`. So setting `after` may change this value!
@apiSuccess {Number} hidden_replies
The number of comments that were omitted from the results because of the `limit` request parameter. Usually, this will be `total_replies` - `limit`.
@apiSuccess {Object[]} replies
The list of comments. Each comment also has the `total_replies`, `replies`, `id` and `hidden_replies` properties to represent nested comments.
@apiSuccess {Object[]} config
Object holding only the client configuration parameters that depend on server settings. Will be dropped in a future version of Isso. Use the dedicated `/config` endpoint instead.
@apiExample {curl} Get 2 comments with 5 responses:
curl 'https://comments.example.com/?uri=/thread/&limit=2&nested_limit=5'
@apiSuccessExample {json} Example response:
{
"total_replies": 14,
"replies": [
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.732863,
"text": "<p>Hello, World!</p>",
"total_replies": 1,
"hidden_replies": 0,
"dislikes": 2,
"modified": null,
"mode": 1,
"replies": [
{
"website": null,
"author": null,
"parent": 1,
"created": 1464818460.769638,
"text": "<p>Hi, now some Markdown: <em>Italic</em>, <strong>bold</strong>, <code>monospace</code>.</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "2af4e1a6c96a",
"id": 2,
"likes": 2
}
],
"hash": "1cb6cc0309a2",
"id": 1,
"likes": 2
},
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.80574,
"text": "<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Accusantium at commodi cum deserunt dolore, error fugiat harum incidunt, ipsa ipsum mollitia nam provident rerum sapiente suscipit tempora vitae? Est, qui?</p>",
"total_replies": 0,
"hidden_replies": 0,
"dislikes": 0,
"modified": null,
"mode": 1,
"replies": [],
"hash": "1cb6cc0309a2",
"id": 3,
"likes": 0
},
"id": null,
"hidden_replies": 12
}
"""
@requires(str, 'uri')
def fetch(self, environ, request, uri):
args = {
'uri': uri,
'after': request.args.get('after', 0)
}
try:
args['limit'] = int(request.args.get('limit'))
except TypeError:
args['limit'] = None
except ValueError:
return BadRequest("limit should be integer")
if request.args.get('parent') is not None:
try:
args['parent'] = int(request.args.get('parent'))
root_id = args['parent']
except ValueError:
return BadRequest("parent should be integer")
else:
args['parent'] = None
root_id = None
plain = request.args.get('plain', '0') == '0'
reply_counts = self.comments.reply_count(uri, after=args['after'])
if args['limit'] == 0:
root_list = []
else:
root_list = list(self.comments.fetch(**args))
if root_id not in reply_counts:
reply_counts[root_id] = 0
try:
nested_limit = int(request.args.get('nested_limit'))
except TypeError:
nested_limit = None
except ValueError:
return BadRequest("nested_limit should be integer")
rv = {
'id': root_id,
'total_replies': reply_counts[root_id],
'hidden_replies': reply_counts[root_id] - len(root_list),
'replies': self._process_fetched_list(root_list, plain),
'config': self.public_conf
}
# We are only checking for one level deep comments
if root_id is None:
for comment in rv['replies']:
if comment['id'] in reply_counts:
comment['total_replies'] = reply_counts[comment['id']]
if nested_limit is not None:
if nested_limit > 0:
args['parent'] = comment['id']
args['limit'] = nested_limit
replies = list(self.comments.fetch(**args))
else:
replies = []
else:
args['parent'] = comment['id']
replies = list(self.comments.fetch(**args))
else:
comment['total_replies'] = 0
replies = []
comment['hidden_replies'] = comment['total_replies'] - \
len(replies)
comment['replies'] = self._process_fetched_list(replies, plain)
return JSON(rv, 200)
def _add_gravatar_image(self, item):
if not self.conf.getboolean('gravatar'):
return item
email = item['email'] or item['author'] or ''
email_md5_hash = md5(email)
gravatar_url = self.conf.get('gravatar-url')
item['gravatar_image'] = gravatar_url.format(email_md5_hash)
return item
def _process_fetched_list(self, fetched_list, plain=False):
for item in fetched_list:
key = item['email'] or item['remote_addr']
val = self.cache.get('hash', key.encode('utf-8'))
if val is None:
val = self.hash(key)
self.cache.set('hash', key.encode('utf-8'), val)
item['hash'] = val
item = self._add_gravatar_image(item)
for key in set(item.keys()) - API.FIELDS:
item.pop(key)
if plain:
for item in fetched_list:
item['text'] = self.isso.render(item['text'])
return fetched_list
"""
@apiDefine likeResponse
@apiSuccess {Number} likes
The (new) number of likes on the comment.
@apiSuccess {Number} dislikes
The (new) number of dislikes on the comment.
@apiSuccessExample Return updated vote counts:
{
"likes": 4,
"dislikes": 3
}
"""
"""
@api {post} /id/:id/like like
@apiGroup Comment
@apiName like
@apiVersion 0.12.6
@apiDescription
Puts a “like” on a comment. The author of a comment cannot like their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to like.
@apiExample {curl} Like comment with id 23:
curl -X POST 'https://comments.example.com/id/23/like'
@apiUse likeResponse
"""
@xhr
def like(self, environ, request, id):
nv = self.comments.vote(True, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /id/:id/dislike dislike
@apiGroup Comment
@apiName dislike
@apiVersion 0.12.6
@apiDescription
Puts a “dislike” on a comment. The author of a comment cannot dislike their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to dislike.
@apiExample {curl} Dislike comment with id 23:
curl -X POST 'https://comments.example.com/id/23/dislike'
@apiUse likeResponse
"""
@xhr
def dislike(self, environ, request, id):
nv = self.comments.vote(False, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /preview preview
@apiGroup Comment
@apiName preview
@apiVersion 0.12.6
@apiDescription
Render comment text using markdown.
@apiBody {String{3...65535}} text
(Raw) comment text
@apiSuccess {String} text
Rendered comment text
@apiExample {curl} Preview comment:
curl -X POST 'https://comments.example.com/preview' -d '{"text": "A sample comment"}'
@apiSuccessExample {json} Rendered comment:
{
"text": "<p>A sample comment</p>"
}
"""
def preview(self, environment, request):
data = request.json
if data.get("text", None) is None:
raise BadRequest("no text given")
return JSON({'text': self.isso.render(data["text"])}, 200)
"""
@api {post} /count Count comments
@apiGroup Thread
@apiName counts
@apiVersion 0.12.6
@apiDescription
Counts the number of comments on multiple threads. The requestor provides a list of thread URIs. The number of comments on each thread is returned as a list, in the same order as the threads were requested. The counts include comments that are responses to comments, but only published comments (i.e. excluding comments pending moderation).
@apiBody {Number[]} urls
Array of URLs for which to fetch comment counts
@apiExample {curl} Get the respective counts of 5 threads:
curl -X POST 'https://comments.example.com/count' -d '["/blog/firstPost.html", "/blog/controversalPost.html", "/blog/howToCode.html", "/blog/boringPost.html", "/blog/isso.html"]'
@apiSuccessExample {json} Counts of 5 threads:
[2, 18, 4, 0, 3]
"""
def counts(self, environ, request):
data = request.json
if not isinstance(data, list) and not all(isinstance(x, str) for x in data):
raise BadRequest("JSON must be a list of URLs")
return JSON(self.comments.count(*data), 200)
"""
@api {get} /feed Atom feed for comments
@apiGroup Thread
@apiName feed
@apiVersion 0.12.6
@apiDescription
Provide an Atom feed for the given thread. Only available if `[rss] base` is set in server config. By default, up to 100 comments are returned.
@apiQuery {String} uri
The uri of the thread to display a feed for
@apiExample {curl} Get an Atom feed for /thread/foo in XML format:
curl 'https://comments.example.com/feed?uri=/thread/foo'
@apiSuccessExample Atom feed for /thread/foo:
<?xml version='1.0' encoding='utf-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:thr="http://purl.org/syndication/thread/1.0">
<updated>2022-05-24T20:38:04.032789Z</updated>
<id>tag:example.com,2018:/isso/thread/thread/foo</id>
<title>Comments for example.com/thread/foo</title>
<entry>
<id>tag:example.com,2018:/isso/1/2</id>
<title>Comment #2</title>
<updated>2022-05-24T20:38:04.032789Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-2" />
<content type="html"><p>And another</p></content>
</entry>
<entry>
<id>tag:example.com,2018:/isso/1/1</id>
<title>Comment #1</title>
<updated>2022-05-24T20:38:00.837703Z</updated>
<author>
<name>Jane Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-1" />
<content type="html"><p>A sample comment</p></content>
</entry>
</feed>
"""
@requires(str, 'uri')
def feed(self, environ, request, uri):
conf = self.isso.conf.section("rss")
if not conf.get('base'):
raise NotFound
args = {
'uri': uri,
'order_by': 'id',
'asc': 0,
'limit': conf.getint('limit')
}
try:
args['limit'] = max(int(request.args.get('limit')), args['limit'])
except TypeError:
pass
except ValueError:
return BadRequest("limit should be integer")
comments = self.comments.fetch(**args)
base = conf.get('base').rstrip('/')
hostname = urlparse(base).netloc
# Let's build an Atom feed.
# RFC 4287: https://tools.ietf.org/html/rfc4287
# RFC 4685: https://tools.ietf.org/html/rfc4685 (threading extensions)
# For IDs: http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id
feed = ET.Element('feed', {
'xmlns': 'http://www.w3.org/2005/Atom',
'xmlns:thr': 'http://purl.org/syndication/thread/1.0'
})
# For feed ID, we would use thread ID, but we may not have
# one. Therefore, we use the URI. We don't have a year
# either...
id = ET.SubElement(feed, 'id')
id.text = 'tag:{hostname},2018:/isso/thread{uri}'.format(
hostname=hostname, uri=uri)
# For title, we don't have much either. Be pretty generic.
title = ET.SubElement(feed, 'title')
title.text = 'Comments for {hostname}{uri}'.format(
hostname=hostname, uri=uri)
comment0 = None
for comment in comments:
if comment0 is None:
comment0 = comment
entry = ET.SubElement(feed, 'entry')
# We don't use a real date in ID either to help with
# threading.
id = ET.SubElement(entry, 'id')
id.text = 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['id'])
title = ET.SubElement(entry, 'title')
title.text = 'Comment #{}'.format(comment['id'])
updated = ET.SubElement(entry, 'updated')
updated.text = '{}Z'.format(datetime.fromtimestamp(
comment['modified'] or comment['created']).isoformat())
author = ET.SubElement(entry, 'author')
name = ET.SubElement(author, 'name')
name.text = comment['author']
ET.SubElement(entry, 'link', {
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['id'])
})
content = ET.SubElement(entry, 'content', {
'type': 'html',
})
content.text = self.isso.render(comment['text'])
if comment['parent']:
ET.SubElement(entry, 'thr:in-reply-to', {
'ref': 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['parent']),
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['parent'])
})
# Updated is mandatory. If we have comments, we use the date
# of last modification of the first one (which is the last
# one). Otherwise, we use a fixed date.
updated = ET.Element('updated')
if comment0 is None:
updated.text = '1970-01-01T01:00:00Z'
else:
updated.text = datetime.fromtimestamp(
comment0['modified'] or comment0['created']).isoformat()
updated.text += 'Z'
feed.insert(0, updated)
output = StringIO()
ET.ElementTree(feed).write(output,
encoding='utf-8',
xml_declaration=True)
response = XML(output.getvalue(), 200)
# Add an etag/last-modified value for caching purpose
if comment0 is None:
response.set_etag('empty')
response.last_modified = 0
else:
response.set_etag('{tid}-{id}'.format(**comment0))
response.last_modified = comment0['modified'] or comment0['created']
return response.make_conditional(request)
"""
@api {get} /config Fetch client config
@apiGroup Thread
@apiName config
@apiVersion 0.13.0
@apiDescription
Returns only the client configuration parameters that depend on server settings.
@apiSuccess {Object[]} config
The client configuration object.
@apiSuccess {Boolean} config.reply-to-self
Commenters can reply to their own comments.
@apiSuccess {Boolean} config.require-author
Commenters must enter a valid name.
@apiSuccess {Boolean} config.require-email
Commenters must enter a valid email address.
@apiSuccess {Boolean} config.reply-notifications
Enable reply notifications via E-mail.
@apiSuccess {Boolean} config.gravatar
Load images from Gravatar service instead of generating them. Also disables regular avatars (see below).
@apiSuccess {Boolean} config.avatar
To avoid having both regular avatars and Gravatars side-by-side,
setting `gravatar` will disable regular avatars. The `avatar` key will
only be sent by the server if `gravatar` is set.
@apiSuccess {Boolean} config.feed
Enable or disable the addition of a link to the feed for the comment
thread.
@apiExample {curl} get the client config:
curl 'https://comments.example.com/config'
@apiSuccessExample {json} Client config:
{
"config": {
"reply-to-self": false,
"require-email": false,
"require-author": false,
"reply-notifications": false,
"gravatar": true,
"avatar": false,
"feed": false
}
}
"""
def config(self, environment, request):
rv = {'config': self.public_conf}
return JSON(rv, 200)
"""
@api {get} /demo/ Isso demo page
@apiGroup Demo
@apiName demo
@apiVersion 0.13.0
@apiPrivate
@apiDescription
Displays a demonstration of Isso with a thread counter and comment widget.
@apiExample {curl} Get demo page
curl 'https://comments.example.com/demo/'
@apiSuccessExample {html} Demo page:
<!DOCTYPE html>
<head>
<title>Isso Demo</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div id="page">
<div id="wrapper" style="max-width: 900px; margin-left: auto; margin-right: auto;">
<h2><a href=".">Isso Demo</a></h2>
<script src="../js/embed.dev.js" data-isso="../" ></script>
<section>
<p>This is a link to a thread, which will display a comment counter:
<a href=".#isso-thread">How many Comments?</a></p>
<p>Below is the actual comment field.</p>
</section>
<section id="isso-thread" data-title="Isso Test"><noscript>Javascript needs to be activated to view comments.</noscript></section>
</div>
</div>
</body>
"""
def demo(self, env, req):
index = pkg_resources.resource_filename('isso', 'demo/index.html')
return send_from_directory(os_path.dirname(index), 'index.html', env)
"""
@api {post} /login/ Log in
@apiGroup Admin
@apiName login
@apiVersion 0.12.6
@apiPrivate
@apiDescription
Log in to the admin interface; redirects to `/admin/` on success. Must send form data, not JSON, in the `POST` body.
@apiBody {String} password
The admin password as set in `[admin] password` in the server config.
@apiExample {curl} Log in
curl -X POST 'https://comments.example.com/login' -F "password=strong_default_password_for_isso_admin" -c cookie.txt
@apiSuccessExample {html} Login successful:
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="https://comments.example.com/admin/">https://comments.example.com/admin/</a>. If not, click the link.
"""
def login(self, env, req):
if not self.isso.conf.getboolean("admin", "enabled"):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('disabled.html', isso_host_script=isso_host_script)
data = req.form
password = self.isso.conf.get("admin", "password")
if data['password'] and data['password'] == password:
response = redirect(re.sub(
r'/login/$',
'/admin/',
get_current_url(env, strip_querystring=True)
))
cookie = self.create_cookie(value=self.isso.sign({"logged": True}),
expires=datetime.now() + timedelta(1))
response.headers.add("Set-Cookie", cookie("admin-session"))
response.headers.add("X-Set-Cookie", cookie("isso-admin-session"))
return response
else:
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('login.html', isso_host_script=isso_host_script)
"""
@api {get} /admin/ Admin interface
@apiGroup Admin
@apiName admin
@apiVersion 0.12.6
@apiPrivate
@apiPermission admin
@apiDescription
Display an admin interface from which to manage comments. Will redirect to `/login` if not already logged in.
@apiQuery {Number} [page=0]
Page number
@apiQuery {Number{1,2,4}} [mode=2]
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiQuery {String{id,created,modified,likes,dislikes,tid}} [order_by=created]
Comment ordering
@apiQuery {Number{0,1}} [asc=0]
Ascending
@apiExample {curl} Listing of published comments:
curl 'https://comments.example.com/admin/?mode=1&page=0&order_by=modified&asc=1' -b cookie.txt
"""
def admin(self, env, req):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
if not self.isso.conf.getboolean("admin", "enabled"):
return render_template('disabled.html', isso_host_script=isso_host_script)
try:
data = self.isso.unsign(req.cookies.get('admin-session', ''),
max_age=60 * 60 * 24)
except BadSignature:
return render_template('login.html', isso_host_script=isso_host_script)
if not data or not data['logged']:
return render_template('login.html', isso_host_script=isso_host_script)
page_size = 100
page = int(req.args.get('page', 0))
order_by = req.args.get('order_by', 'created')
asc = int(req.args.get('asc', 0))
mode = int(req.args.get('mode', 2))
comments = self.comments.fetchall(mode=mode, page=page,
limit=page_size,
order_by=order_by,
asc=asc)
comments_enriched = []
for comment in list(comments):
comment['hash'] = self.isso.sign(comment['id'])
comments_enriched.append(comment)
comment_mode_count = self.comments.count_modes()
max_page = int(sum(comment_mode_count.values()) / 100)
return render_template('admin.html', comments=comments_enriched,
page=int(page), mode=int(mode),
conf=self.conf, max_page=max_page,
counts=comment_mode_count,
order_by=order_by, asc=asc,
isso_host_script=isso_host_script)
"""
@api {get} /latest latest
@apiGroup Comment
@apiName latest
@apiVersion 0.12.6
@apiDescription
Get the latest comments from the system, no matter which thread. Only available if `[general] latest-enabled` is set to `true` in server config.
@apiQuery {Number} limit
The quantity of last comments to retrieve
@apiExample {curl} Get the latest 5 comments
curl 'https://comments.example.com/latest?limit=5'
@apiUse commentResponse
@apiSuccessExample Example result:
[
{
"website": null,
"uri": "/some",
"author": null,
"parent": null,
"created": 1464912312.123416,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 3,
"likes": 1
},
{
"website": null,
"uri": "/other",
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 0
}
]
"""
def latest(self, environ, request):
# if the feature is not allowed, don't present the endpoint
if not self.conf.getboolean("latest-enabled"):
return NotFound(
"Unavailable because 'latest-enabled' not set by site admin"
)
# get and check the limit
bad_limit_msg = "Query parameter 'limit' is mandatory (integer, >0)"
try:
limit = int(request.args['limit'])
except (KeyError, ValueError):
return BadRequest(bad_limit_msg)
if limit <= 0:
return BadRequest(bad_limit_msg)
# retrieve the latest N comments from the DB
all_comments_gen = self.comments.fetchall(limit=None, order_by='created', mode='1')
comments = collections.deque(all_comments_gen, maxlen=limit)
# prepare a special set of fields (except text which is rendered specifically)
fields = {
'author',
'created',
'dislikes',
'id',
'likes',
'mode',
'modified',
'parent',
'text',
'uri',
'website',
}
# process the retrieved comments and build results
result = []
for comment in comments:
processed = {key: comment[key] for key in fields}
processed['text'] = self.isso.render(comment['text'])
result.append(processed)
return JSON(result, 200)
| schneidr | 9a0e1867e4ebe7e1ee7106584adb29b16880a955 | 73d9886100fd56cbceb38e2e00b84f52f0328a8c | This comment seems odd to me - urlparse handles port numbers in URLs fine, so there must be something else going on? | jelmer | 0 |
posativ/isso | 952 | Allow umlaut domains for website addresses | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [x] (If adding features:) I have added tests to cover my changes
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
Changed website validation to allow domain names containing umlauts
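For illustration, a minimal sketch of what the relaxed validation is expected to accept (assuming the `\w`-based pattern with `re.UNICODE` used in `isso/views/comments.py` below; not an exhaustive test):

```python
# Hedged sketch: exercises the module-level isurl()/normalize() helpers.
from isso.views.comments import isurl, normalize

assert isurl("münchen.de")                       # umlaut hostname now passes
assert isurl("https://müller.example.com/blog")  # scheme + unicode host + path
assert not isurl("not a url")                    # malformed values still fail
print(normalize("münchen.de"))                   # -> "http://münchen.de"
```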
## Why is this necessary?
Resolves issue #951 | null | 2023-04-18 06:57:32+00:00 | 2023-08-04 13:01:56+00:00 | isso/views/comments.py | # -*- encoding: utf-8 -*-
import collections
import re
import time
import functools
import json # json.dumps to put URL in <script>
import pkg_resources
from configparser import NoOptionError
from datetime import datetime, timedelta
from html import escape
from io import BytesIO as StringIO
from os import path as os_path
from urllib.parse import unquote, urlparse
from xml.etree import ElementTree as ET
from itsdangerous import SignatureExpired, BadSignature
from werkzeug.exceptions import BadRequest, Forbidden, NotFound
from werkzeug.http import dump_cookie
from werkzeug.routing import Rule
from werkzeug.utils import redirect, send_from_directory
from werkzeug.wrappers import Response
from werkzeug.wsgi import get_current_url
from isso import utils, local
from isso.utils import (http, parse,
JSONResponse as JSON, XMLResponse as XML,
render_template)
from isso.utils.hash import md5, sha1
from isso.views import requires
# from Django apparently, looks good to me *duck*
__url_re = re.compile(
r'^'
r'(https?://)?'
# domain...
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)'
r'$', re.IGNORECASE)
def isurl(text):
return __url_re.match(text) is not None
def normalize(url):
if not url.startswith(("http://", "https://")):
return "http://" + url
return url
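# Illustrative examples (clarifying notes, not exhaustive): with the ASCII-only
# character classes above, isurl("example.com/foo") and
# isurl("http://localhost:8080/") return True, while a hostname containing
# umlauts such as "münchen.de" does not match; normalize("example.com")
# prepends a scheme and returns "http://example.com".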
def xhr(func):
"""A decorator to check for CSRF on POST/PUT/DELETE using a <form>
element and JS to execute automatically (see #40 for a proof-of-concept).
When an attacker uses a <form> to downvote a comment, the browser *should*
add a `Content-Type: ...` header with three possible values:
* application/x-www-form-urlencoded
* multipart/form-data
* text/plain
If the header is not sent or requests `application/json`, the request is
not forged (XHR is restricted by CORS separately).
"""
"""
@apiDefine csrf
@apiHeader {String="application/json"} Content-Type
The content type must be set to `application/json` to prevent CSRF attacks.
"""
def dec(self, env, req, *args, **kwargs):
if req.content_type and not req.content_type.startswith("application/json"):
raise Forbidden("CSRF")
return func(self, env, req, *args, **kwargs)
return dec
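# Illustrative note (added for clarity): a forged cross-site <form> submission
# arrives with Content-Type application/x-www-form-urlencoded, multipart/form-data
# or text/plain and is rejected with a 403, while a legitimate XHR/fetch call
# sending Content-Type: application/json is passed through to the wrapped view.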
class API(object):
FIELDS = set(['id', 'parent', 'text', 'author', 'website',
'mode', 'created', 'modified', 'likes', 'dislikes', 'hash', 'gravatar_image', 'notification'])
# comment fields, that can be submitted
ACCEPT = set(['text', 'author', 'website', 'email', 'parent', 'title', 'notification'])
VIEWS = [
('fetch', ('GET', '/')),
('new', ('POST', '/new')),
('counts', ('POST', '/count')),
('feed', ('GET', '/feed')),
('latest', ('GET', '/latest')),
('view', ('GET', '/id/<int:id>')),
('edit', ('PUT', '/id/<int:id>')),
('delete', ('DELETE', '/id/<int:id>')),
('unsubscribe', ('GET', '/id/<int:id>/unsubscribe/<string:email>/<string:key>')),
('moderate', ('GET', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('moderate', ('POST', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('like', ('POST', '/id/<int:id>/like')),
('dislike', ('POST', '/id/<int:id>/dislike')),
('demo', ('GET', '/demo/')),
('preview', ('POST', '/preview')),
('config', ('GET', '/config')),
('login', ('POST', '/login/')),
('admin', ('GET', '/admin/'))
]
def __init__(self, isso, hasher):
self.isso = isso
self.hash = hasher.uhash
self.cache = isso.cache
self.signal = isso.signal
self.conf = isso.conf.section("general")
self.moderated = isso.conf.getboolean("moderation", "enabled")
# this is similar to the wordpress setting "Comment author must have a previously approved comment"
try:
self.approve_if_email_previously_approved = isso.conf.getboolean("moderation", "approve-if-email-previously-approved")
except NoOptionError:
self.approve_if_email_previously_approved = False
try:
self.trusted_proxies = list(isso.conf.getiter("server", "trusted-proxies"))
except NoOptionError:
self.trusted_proxies = []
# These configuration records can be read out by client
self.public_conf = {}
self.public_conf["reply-to-self"] = isso.conf.getboolean("guard", "reply-to-self")
self.public_conf["require-email"] = isso.conf.getboolean("guard", "require-email")
self.public_conf["require-author"] = isso.conf.getboolean("guard", "require-author")
self.public_conf["reply-notifications"] = isso.conf.getboolean("general", "reply-notifications")
self.public_conf["gravatar"] = isso.conf.getboolean("general", "gravatar")
if self.public_conf["gravatar"]:
self.public_conf["avatar"] = False
self.public_conf["feed"] = False
rss = isso.conf.section("rss")
if rss and rss.get('base'):
self.public_conf["feed"] = True
self.guard = isso.db.guard
self.threads = isso.db.threads
self.comments = isso.db.comments
for (view, (method, path)) in self.VIEWS:
isso.urls.add(
Rule(path, methods=[method], endpoint=getattr(self, view)))
@classmethod
def verify(cls, comment):
if comment.get("text") is None:
return False, "text is missing"
if not isinstance(comment.get("parent"), (int, type(None))):
return False, "parent must be an integer or null"
for key in ("text", "author", "website", "email"):
if not isinstance(comment.get(key), (str, type(None))):
return False, "%s must be a string or null" % key
if len(comment["text"].rstrip()) < 3:
return False, "text is too short (minimum length: 3)"
if len(comment["text"]) > 65535:
return False, "text is too long (maximum length: 65535)"
if len(comment.get("email") or "") > 254:
return False, "http://tools.ietf.org/html/rfc5321#section-4.5.3"
if comment.get("website"):
if len(comment["website"]) > 254:
return False, "arbitrary length limit"
if not isurl(comment["website"]):
return False, "Website not Django-conform"
return True, ""
# Common definitions for apidoc follow:
"""
@apiDefine plainParam
@apiQuery {Number=0,1} [plain=0]
If set to `1`, the plain text entered by the user will be returned in the comments’ `text` attribute (instead of the rendered markdown).
"""
"""
@apiDefine commentResponse
@apiSuccess {Number} id
The comment’s id (assigned by the server).
@apiSuccess {Number} parent
Id of the comment this comment is a reply to. `null` if this is a top-level-comment.
@apiSuccess {Number=1,2,4} mode
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiSuccess {String} author
The comment’s author’s name or `null`.
@apiSuccess {String} website
The comment’s author’s website or `null`.
@apiSuccess {String} hash
A hash uniquely identifying the comment’s author.
@apiSuccess {Number} created
UNIX timestamp of the time the comment was created (on the server).
@apiSuccess {Number} modified
UNIX timestamp of the time the comment was last modified (on the server). `null` if the comment was not yet modified.
"""
"""
@apiDefine admin Admin access needed
Only available to a logged-in site admin. Requires a valid `admin-session` cookie.
"""
"""
@api {post} /new create new
@apiGroup Comment
@apiName new
@apiVersion 0.12.6
@apiDescription
Creates a new comment. The server issues a cookie per new comment which acts as
an authentication token to modify or delete the comment.
The token is cryptographically signed and expires automatically after 900 seconds (=15min) by default.
@apiUse csrf
@apiQuery {String} uri
The uri of the thread to create the comment on.
@apiBody {String{3...65535}} text
The comment’s raw text.
@apiBody {String} [author]
The comment’s author’s name.
@apiBody {String{...254}} [email]
The comment’s author’s email address.
@apiBody {String{...254}} [website]
The comment’s author’s website’s url. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiBody {Number} [parent]
The parent comment’s id if the new comment is a response to an existing comment.
@apiExample {curl} Create a reply to comment with id 15:
curl 'https://comments.example.com/new?uri=/thread/' -d '{"text": "Stop saying that! *isso*!", "author": "Max Rant", "email": "[email protected]", "parent": 15}' -H 'Content-Type: application/json' -c cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Success after the above request:
HTTP/1.1 201 CREATED
Set-Cookie: 1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
X-Set-Cookie: isso-1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
{
"website": null,
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>Stop saying that! <em>isso</em>!</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "e644f6ee43c0",
"id": 23,
"likes": 0
}
"""
@xhr
@requires(str, 'uri')
def new(self, environ, request, uri):
data = request.json
for field in set(data.keys()) - API.ACCEPT:
data.pop(field)
for key in ("author", "email", "website", "parent"):
data.setdefault(key, None)
valid, reason = API.verify(data)
if not valid:
return BadRequest(reason)
for field in ("author", "email", "website"):
if data.get(field) is not None:
data[field] = escape(data[field], quote=False)
if data.get("website"):
data["website"] = normalize(data["website"])
data['mode'] = 2 if self.moderated else 1
data['remote_addr'] = self._remote_addr(request)
with self.isso.lock:
if uri not in self.threads:
if not data.get('title'):
with http.curl('GET', local("origin"), uri) as resp:
if resp and resp.status == 200:
uri, title = parse.thread(resp.read(), id=uri)
else:
return NotFound('URI does not exist %s')
else:
title = data['title']
thread = self.threads.new(uri, title)
self.signal("comments.new:new-thread", thread)
else:
thread = self.threads[uri]
# notify extensions that the new comment is about to be saved
self.signal("comments.new:before-save", thread, data)
valid, reason = self.guard.validate(uri, data)
if not valid:
self.signal("comments.new:guard", reason)
raise Forbidden(reason)
with self.isso.lock:
# if email-based auto-moderation enabled, check for previously approved author
# right before approval.
if self.approve_if_email_previously_approved and self.comments.is_previously_approved_author(data['email']):
data['mode'] = 1
rv = self.comments.add(uri, data)
# notify extensions that the new comment has been successfully saved
self.signal("comments.new:after-save", thread, rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
rv["hash"] = self.hash(rv['email'] or rv['remote_addr'])
self.cache.set(
'hash', (rv['email'] or rv['remote_addr']).encode('utf-8'), rv['hash'])
rv = self._add_gravatar_image(rv)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
# success!
self.signal("comments.new:finish", thread, rv)
resp = JSON(rv, 202 if rv["mode"] == 2 else 201)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
def _remote_addr(self, request):
"""Return the anonymized IP address of the requester.
Takes into consideration a potential X-Forwarded-For HTTP header
if a necessary server.trusted-proxies configuration entry is set.
Recipe source: https://stackoverflow.com/a/22936947/636849
"""
remote_addr = request.remote_addr
if self.trusted_proxies:
route = request.access_route + [remote_addr]
remote_addr = next((addr for addr in reversed(route)
if addr not in self.trusted_proxies), remote_addr)
return utils.anonymize(str(remote_addr))
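# Illustrative example (added for clarity): with trusted-proxies = ["10.0.0.1"]
# and an access route of ["203.0.113.7", "10.0.0.1"], the first hop that is not
# a trusted proxy ("203.0.113.7") is selected and then anonymized.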
def create_cookie(self, **kwargs):
"""
Setting cookies to SameSite=None requires "Secure" attribute.
For http-only, we need to override the dump_cookie() default SameSite=None
or the cookie will be rejected.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite#samesitenone_requires_secure
"""
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
samesite = self.isso.conf.get("server", "samesite")
if isso_host_script.startswith("https://"):
secure = True
samesite = samesite or "None"
else:
secure = False
samesite = samesite or "Lax"
return functools.partial(dump_cookie, **kwargs,
secure=secure, samesite=samesite)
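# Illustrative note (added for clarity): with an https public-endpoint the
# returned cookie factory emits "Secure" and SameSite=None (unless [server]
# samesite overrides it); a plain http endpoint falls back to SameSite=Lax
# without the Secure flag.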
"""
@api {get} /id/:id view
@apiGroup Comment
@apiName view
@apiVersion 0.12.6
@apiDescription
View an existing comment, for the purpose of editing. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
@apiParam {Number} id
The id of the comment to view.
@apiUse plainParam
@apiExample {curl} View the comment with id 4:
curl 'https://comments.example.com/id/4' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample Example result:
{
"website": null,
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 1
}
"""
def view(self, environ, request, id):
rv = self.comments.get(id)
if rv is None:
raise NotFound
try:
self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
if request.args.get('plain', '0') == '0':
rv['text'] = self.isso.render(rv['text'])
return JSON(rv, 200)
"""
@api {put} /id/:id edit
@apiGroup Comment
@apiName edit
@apiVersion 0.12.6
@apiDescription
Edit an existing comment. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details. Editing a comment will set a new edit cookie in the response.
@apiUse csrf
@apiParam {Number} id
The id of the comment to edit.
@apiBody {String{3...65535}} text
A new (raw) text for the comment.
@apiBody {String} [author]
The modified comment’s author’s name.
@apiBody {String{...254}} [website]
The modified comment’s author’s website. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiExample {curl} Edit comment with id 23:
curl -X PUT 'https://comments.example.com/id/23' -d '{"text": "I see your point. However, I still disagree.", "website": "maxrant.important.com"}' -H 'Content-Type: application/json' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Example response:
HTTP/1.1 200 OK
{
"website": "maxrant.important.com",
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>I see your point. However, I still disagree.</p>",
"dislikes": 0,
"modified": 1464943439.073961,
"mode": 1,
"id": 23,
"likes": 0
}
"""
@xhr
def edit(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
data = request.json
if data.get("text") is None or len(data["text"]) < 3:
raise BadRequest("no text given")
for key in set(data.keys()) - set(["text", "author", "website"]):
data.pop(key)
data['modified'] = time.time()
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
resp = JSON(rv, 200)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
"""
@api {delete} /id/:id delete
@apiGroup Comment
@apiName delete
@apiVersion 0.12.6
@apiDescription
Delete an existing comment. Deleting a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
Returns either `null` or a comment with an empty text value when the comment is still referenced by other comments.
@apiUse csrf
@apiParam {Number} id
Id of the comment to delete.
@apiExample {curl} Delete comment with id 14:
curl -X DELETE 'https://comments.example.com/id/14' -b cookie.txt
@apiSuccessExample Successful deletion returns null and deletes cookie:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
null
@apiSuccessExample {json} Comment still referenced by another:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
{
"id": 14,
"parent": null,
"created": 1653432621.0512516,
"modified": 1653434488.571937,
"mode": 4,
"text": "",
"author": null,
"website": null,
"likes": 0,
"dislikes": 0,
"notification": 0
}
"""
@xhr
def delete(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ""))
except (SignatureExpired, BadSignature):
raise Forbidden
else:
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
with self.isso.lock:
rv = self.comments.delete(id)
if rv:
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.delete", id)
resp = JSON(rv, 200)
cookie = self.create_cookie(expires=0, max_age=0)
resp.headers.add("Set-Cookie", cookie(str(id)))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % id))
return resp
"""
@api {get} /id/:id/unsubscribe/:email/:key unsubscribe
@apiGroup Comment
@apiName unsubscribe
@apiVersion 0.12.6
@apiDescription
Opt out from getting any further email notifications about replies to a particular comment. In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by isso.
@apiParam {Number} id
The id of the comment to unsubscribe from replies to.
@apiParam {String} email
The email address of the subscriber.
@apiParam {String} key
The key to authenticate the subscriber.
@apiExample {curl} Unsubscribe Alice from replies to comment with id 13:
curl -X GET 'https://comments.example.com/id/13/unsubscribe/[email protected]/WyJ1bnN1YnNjcmliZSIsImFsaWNlQGV4YW1wbGUuY29tIl0.DdcH9w.Wxou-l22ySLFkKUs7RUHnoM8Kos'
@apiSuccessExample {html} Using GET:
<!DOCTYPE html>
<html>
<head>Successfully unsubscribed</head>
<body>
<p>You have been unsubscribed from replies in the given conversation.</p>
</body>
</html>
"""
def unsubscribe(self, environ, request, id, email, key):
email = unquote(email)
try:
rv = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
if not isinstance(rv, list) or len(rv) != 2:
raise Forbidden
if rv[0] != 'unsubscribe' or rv[1] != email:
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
with self.isso.lock:
self.comments.unsubscribe(email, id)
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
" <title>Successfully unsubscribed</title>"
"</head>"
"<body>"
" <p>You have been unsubscribed from replies in the given conversation.</p>"
"</body>"
"</html>")
return Response(modal, 200, content_type="text/html")
"""
@api {post} /id/:id/:action/:key moderate
@apiGroup Comment
@apiName moderate
@apiVersion 0.12.6
@apiDescription
Publish or delete a comment that is in the moderation queue (mode `2`). In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by Isso or provided in the admin interface.
This endpoint can also be used with a `GET` request. In that case, a html page is returned that asks the user whether they are sure to perform the selected action. If they select “yes”, the query is repeated using `POST`.
@apiParam {Number} id
The id of the comment to moderate.
@apiParam {String=activate,edit,delete} action
- `activate` to publish the comment (change its mode to `1`).
- `edit`: Send `text`, `author`, `email` and `website` via `POST`.
To be used from the admin interface. Better use the `edit` `PUT` endpoint.
- `delete` to delete the comment.
@apiParam {String} key
The moderation key to authenticate the moderation.
@apiExample {curl} delete comment with id 13:
curl -X POST 'https://comments.example.com/id/13/delete/MTM.CjL6Fg.REIdVXa-whJS_x8ojQL4RrXnuF4'
@apiSuccessExample {html} Request deletion using GET:
<!DOCTYPE html>
<html>
<head>
<script>
if (confirm('Delete: Are you sure?')) {
xhr = new XMLHttpRequest;
xhr.open('POST', window.location.href);
xhr.send(null);
xhr.onload = function() {
window.location.href = "https://example.com/example-thread/#isso-13";
};
}
</script>
@apiSuccessExample Delete using POST:
Comment has been deleted
@apiSuccessExample Activate using POST:
Comment has been activated
"""
def moderate(self, environ, request, id, action, key):
try:
id = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
thread = self.threads.get(item['tid'])
link = local("origin") + thread["uri"] + "#isso-%i" % item["id"]
if request.method == "GET":
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
"<script>"
" if (confirm('%s: Are you sure?')) {"
" xhr = new XMLHttpRequest;"
" xhr.open('POST', window.location.href);"
" xhr.send(null);"
" xhr.onload = function() {"
" window.location.href = %s;"
" };"
" }"
"</script>" % (action.capitalize(), json.dumps(link)))
return Response(modal, 200, content_type="text/html")
if action == "activate":
if item['mode'] == 1:
return Response("Already activated", 200)
with self.isso.lock:
self.comments.activate(id)
self.signal("comments.activate", thread, item)
return Response("Comment has been activated", 200)
elif action == "edit":
data = request.json
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
return JSON(rv, 200)
else:
with self.isso.lock:
self.comments.delete(id)
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
self.signal("comments.delete", id)
return Response("Comment has been deleted", 200)
"""
@api {get} / Get comments
@apiGroup Thread
@apiName fetch
@apiVersion 0.12.6
@apiDescription Queries the publicly visible comments of a thread.
@apiQuery {String} uri
The URI of thread to get the comments from.
@apiQuery {Number} [parent]
Return only comments that are children of the comment with the provided ID.
@apiUse plainParam
@apiQuery {Number} [limit]
The maximum number of returned top-level comments. Omit for unlimited results.
@apiQuery {Number} [nested_limit]
The maximum number of returned nested comments per comment. Omit for unlimited results.
@apiQuery {Number} [after]
Includes only comments that were added after the provided UNIX timestamp.
@apiSuccess {Number} id
Id of the comment `replies` is the list of replies of. `null` for the list of top-level comments.
@apiSuccess {Number} total_replies
The number of replies if the `limit` parameter was not set. If `after` is set to `X`, this is the number of comments that were created after `X`. So setting `after` may change this value!
@apiSuccess {Number} hidden_replies
The number of comments that were omitted from the results because of the `limit` request parameter. Usually, this will be `total_replies` - `limit`.
@apiSuccess {Object[]} replies
The list of comments. Each comment also has the `total_replies`, `replies`, `id` and `hidden_replies` properties to represent nested comments.
@apiSuccess {Object[]} config
Object holding only the client configuration parameters that depend on server settings. Will be dropped in a future version of Isso. Use the dedicated `/config` endpoint instead.
@apiExample {curl} Get 2 comments with 5 responses:
curl 'https://comments.example.com/?uri=/thread/&limit=2&nested_limit=5'
@apiSuccessExample {json} Example response:
{
"total_replies": 14,
"replies": [
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.732863,
"text": "<p>Hello, World!</p>",
"total_replies": 1,
"hidden_replies": 0,
"dislikes": 2,
"modified": null,
"mode": 1,
"replies": [
{
"website": null,
"author": null,
"parent": 1,
"created": 1464818460.769638,
"text": "<p>Hi, now some Markdown: <em>Italic</em>, <strong>bold</strong>, <code>monospace</code>.</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "2af4e1a6c96a",
"id": 2,
"likes": 2
}
],
"hash": "1cb6cc0309a2",
"id": 1,
"likes": 2
},
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.80574,
"text": "<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Accusantium at commodi cum deserunt dolore, error fugiat harum incidunt, ipsa ipsum mollitia nam provident rerum sapiente suscipit tempora vitae? Est, qui?</p>",
"total_replies": 0,
"hidden_replies": 0,
"dislikes": 0,
"modified": null,
"mode": 1,
"replies": [],
"hash": "1cb6cc0309a2",
"id": 3,
"likes": 0
},
"id": null,
"hidden_replies": 12
}
"""
@requires(str, 'uri')
def fetch(self, environ, request, uri):
args = {
'uri': uri,
'after': request.args.get('after', 0)
}
try:
args['limit'] = int(request.args.get('limit'))
except TypeError:
args['limit'] = None
except ValueError:
return BadRequest("limit should be integer")
if request.args.get('parent') is not None:
try:
args['parent'] = int(request.args.get('parent'))
root_id = args['parent']
except ValueError:
return BadRequest("parent should be integer")
else:
args['parent'] = None
root_id = None
plain = request.args.get('plain', '0') == '0'
reply_counts = self.comments.reply_count(uri, after=args['after'])
if args['limit'] == 0:
root_list = []
else:
root_list = list(self.comments.fetch(**args))
if root_id not in reply_counts:
reply_counts[root_id] = 0
try:
nested_limit = int(request.args.get('nested_limit'))
except TypeError:
nested_limit = None
except ValueError:
return BadRequest("nested_limit should be integer")
rv = {
'id': root_id,
'total_replies': reply_counts[root_id],
'hidden_replies': reply_counts[root_id] - len(root_list),
'replies': self._process_fetched_list(root_list, plain),
'config': self.public_conf
}
# Nested replies are only fetched one level deep
if root_id is None:
for comment in rv['replies']:
if comment['id'] in reply_counts:
comment['total_replies'] = reply_counts[comment['id']]
if nested_limit is not None:
if nested_limit > 0:
args['parent'] = comment['id']
args['limit'] = nested_limit
replies = list(self.comments.fetch(**args))
else:
replies = []
else:
args['parent'] = comment['id']
replies = list(self.comments.fetch(**args))
else:
comment['total_replies'] = 0
replies = []
comment['hidden_replies'] = comment['total_replies'] - \
len(replies)
comment['replies'] = self._process_fetched_list(replies, plain)
return JSON(rv, 200)
def _add_gravatar_image(self, item):
if not self.conf.getboolean('gravatar'):
return item
email = item['email'] or item['author'] or ''
email_md5_hash = md5(email)
gravatar_url = self.conf.get('gravatar-url')
item['gravatar_image'] = gravatar_url.format(email_md5_hash)
return item
def _process_fetched_list(self, fetched_list, plain=False):
for item in fetched_list:
key = item['email'] or item['remote_addr']
val = self.cache.get('hash', key.encode('utf-8'))
if val is None:
val = self.hash(key)
self.cache.set('hash', key.encode('utf-8'), val)
item['hash'] = val
item = self._add_gravatar_image(item)
for key in set(item.keys()) - API.FIELDS:
item.pop(key)
if plain:
for item in fetched_list:
item['text'] = self.isso.render(item['text'])
return fetched_list
"""
@apiDefine likeResponse
@apiSuccess {Number} likes
The (new) number of likes on the comment.
@apiSuccess {Number} dislikes
The (new) number of dislikes on the comment.
@apiSuccessExample Return updated vote counts:
{
"likes": 4,
"dislikes": 3
}
"""
"""
@api {post} /id/:id/like like
@apiGroup Comment
@apiName like
@apiVersion 0.12.6
@apiDescription
Puts a “like” on a comment. The author of a comment cannot like their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to like.
@apiExample {curl} Like comment with id 23:
curl -X POST 'https://comments.example.com/id/23/like'
@apiUse likeResponse
"""
@xhr
def like(self, environ, request, id):
nv = self.comments.vote(True, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /id/:id/dislike dislike
@apiGroup Comment
@apiName dislike
@apiVersion 0.12.6
@apiDescription
Puts a “dislike” on a comment. The author of a comment cannot dislike their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to dislike.
@apiExample {curl} Dislike comment with id 23:
curl -X POST 'https://comments.example.com/id/23/dislike'
@apiUse likeResponse
"""
@xhr
def dislike(self, environ, request, id):
nv = self.comments.vote(False, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /preview preview
@apiGroup Comment
@apiName preview
@apiVersion 0.12.6
@apiDescription
Render comment text using markdown.
@apiBody {String{3...65535}} text
(Raw) comment text
@apiSuccess {String} text
Rendered comment text
@apiExample {curl} Preview comment:
curl -X POST 'https://comments.example.com/preview' -d '{"text": "A sample comment"}'
@apiSuccessExample {json} Rendered comment:
{
"text": "<p>A sample comment</p>"
}
"""
def preview(self, environment, request):
data = request.json
if data.get("text", None) is None:
raise BadRequest("no text given")
return JSON({'text': self.isso.render(data["text"])}, 200)
"""
@api {post} /count Count comments
@apiGroup Thread
@apiName counts
@apiVersion 0.12.6
@apiDescription
Counts the number of comments on multiple threads. The requestor provides a list of thread uris. The number of comments on each thread is returned as a list, in the same order as the threads were requested. The counts include comments that are responses to comments, but only published comments (i.e. excluding comments pending moderation).
@apiBody {Number[]} urls
Array of URLs for which to fetch comment counts
@apiExample {curl} Get the respective counts of 5 threads:
curl -X POST 'https://comments.example.com/count' -d '["/blog/firstPost.html", "/blog/controversalPost.html", "/blog/howToCode.html", "/blog/boringPost.html", "/blog/isso.html"]'
@apiSuccessExample {json} Counts of 5 threads:
[2, 18, 4, 0, 3]
"""
def counts(self, environ, request):
data = request.json
if not isinstance(data, list) and not all(isinstance(x, str) for x in data):
raise BadRequest("JSON must be a list of URLs")
return JSON(self.comments.count(*data), 200)
"""
@api {get} /feed Atom feed for comments
@apiGroup Thread
@apiName feed
@apiVersion 0.12.6
@apiDescription
Provide an Atom feed for the given thread. Only available if `[rss] base` is set in server config. By default, up to 100 comments are returned.
@apiQuery {String} uri
The uri of the thread to display a feed for
@apiExample {curl} Get an Atom feed for /thread/foo in XML format:
curl 'https://comments.example.com/feed?uri=/thread/foo'
@apiSuccessExample Atom feed for /thread/foo:
<?xml version='1.0' encoding='utf-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:thr="http://purl.org/syndication/thread/1.0">
<updated>2022-05-24T20:38:04.032789Z</updated>
<id>tag:example.com,2018:/isso/thread/thread/foo</id>
<title>Comments for example.com/thread/foo</title>
<entry>
<id>tag:example.com,2018:/isso/1/2</id>
<title>Comment #2</title>
<updated>2022-05-24T20:38:04.032789Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-2" />
<content type="html"><p>And another</p></content>
</entry>
<entry>
<id>tag:example.com,2018:/isso/1/1</id>
<title>Comment #1</title>
<updated>2022-05-24T20:38:00.837703Z</updated>
<author>
<name>Jane Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-1" />
<content type="html"><p>A sample comment</p></content>
</entry>
</feed>
"""
@requires(str, 'uri')
def feed(self, environ, request, uri):
conf = self.isso.conf.section("rss")
if not conf.get('base'):
raise NotFound
args = {
'uri': uri,
'order_by': 'id',
'asc': 0,
'limit': conf.getint('limit')
}
try:
args['limit'] = max(int(request.args.get('limit')), args['limit'])
except TypeError:
pass
except ValueError:
return BadRequest("limit should be integer")
comments = self.comments.fetch(**args)
base = conf.get('base').rstrip('/')
hostname = urlparse(base).netloc
# Let's build an Atom feed.
# RFC 4287: https://tools.ietf.org/html/rfc4287
# RFC 4685: https://tools.ietf.org/html/rfc4685 (threading extensions)
# For IDs: http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id
feed = ET.Element('feed', {
'xmlns': 'http://www.w3.org/2005/Atom',
'xmlns:thr': 'http://purl.org/syndication/thread/1.0'
})
# For feed ID, we would use thread ID, but we may not have
# one. Therefore, we use the URI. We don't have a year
# either...
id = ET.SubElement(feed, 'id')
id.text = 'tag:{hostname},2018:/isso/thread{uri}'.format(
hostname=hostname, uri=uri)
# For title, we don't have much either. Be pretty generic.
title = ET.SubElement(feed, 'title')
title.text = 'Comments for {hostname}{uri}'.format(
hostname=hostname, uri=uri)
comment0 = None
for comment in comments:
if comment0 is None:
comment0 = comment
entry = ET.SubElement(feed, 'entry')
# We don't use a real date in ID either to help with
# threading.
id = ET.SubElement(entry, 'id')
id.text = 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['id'])
title = ET.SubElement(entry, 'title')
title.text = 'Comment #{}'.format(comment['id'])
updated = ET.SubElement(entry, 'updated')
updated.text = '{}Z'.format(datetime.fromtimestamp(
comment['modified'] or comment['created']).isoformat())
author = ET.SubElement(entry, 'author')
name = ET.SubElement(author, 'name')
name.text = comment['author']
ET.SubElement(entry, 'link', {
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['id'])
})
content = ET.SubElement(entry, 'content', {
'type': 'html',
})
content.text = self.isso.render(comment['text'])
if comment['parent']:
ET.SubElement(entry, 'thr:in-reply-to', {
'ref': 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['parent']),
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['parent'])
})
# Updated is mandatory. If we have comments, we use the date
# of last modification of the first one (which is the last
# one). Otherwise, we use a fixed date.
updated = ET.Element('updated')
if comment0 is None:
updated.text = '1970-01-01T01:00:00Z'
else:
updated.text = datetime.fromtimestamp(
comment0['modified'] or comment0['created']).isoformat()
updated.text += 'Z'
feed.insert(0, updated)
output = StringIO()
ET.ElementTree(feed).write(output,
encoding='utf-8',
xml_declaration=True)
response = XML(output.getvalue(), 200)
# Add an etag/last-modified value for caching purpose
if comment0 is None:
response.set_etag('empty')
response.last_modified = 0
else:
response.set_etag('{tid}-{id}'.format(**comment0))
response.last_modified = comment0['modified'] or comment0['created']
return response.make_conditional(request)
"""
@api {get} /config Fetch client config
@apiGroup Thread
@apiName config
@apiVersion 0.13.0
@apiDescription
Returns only the client configuration parameters that depend on server settings.
@apiSuccess {Object[]} config
The client configuration object.
@apiSuccess {Boolean} config.reply-to-self
Commenters can reply to their own comments.
@apiSuccess {Boolean} config.require-author
Commenters must enter valid Name.
@apiSuccess {Boolean} config.require-email
Commenters must enter valid email.
@apiSuccess {Boolean} config.reply-notifications
Enable reply notifications via E-mail.
@apiSuccess {Boolean} config.gravatar
Load images from Gravatar service instead of generating them. Also disables regular avatars (see below).
@apiSuccess {Boolean} config.avatar
To avoid having both regular avatars and Gravatars side-by-side,
setting `gravatar` will disable regular avatars. The `avatar` key will
only be sent by the server if `gravatar` is set.
@apiSuccess {Boolean} config.feed
Enable or disable the addition of a link to the feed for the comment
thread.
@apiExample {curl} get the client config:
curl 'https://comments.example.com/config'
@apiSuccessExample {json} Client config:
{
"config": {
"reply-to-self": false,
"require-email": false,
"require-author": false,
"reply-notifications": false,
"gravatar": true,
"avatar": false,
"feed": false
}
}
"""
def config(self, environment, request):
rv = {'config': self.public_conf}
return JSON(rv, 200)
"""
@api {get} /demo/ Isso demo page
@apiGroup Demo
@apiName demo
@apiVersion 0.13.0
@apiPrivate
@apiDescription
Displays a demonstration of Isso with a thread counter and comment widget.
@apiExample {curl} Get demo page
curl 'https://comments.example.com/demo/'
@apiSuccessExample {html} Demo page:
<!DOCTYPE html>
<head>
<title>Isso Demo</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div id="page">
<div id="wrapper" style="max-width: 900px; margin-left: auto; margin-right: auto;">
<h2><a href=".">Isso Demo</a></h2>
<script src="../js/embed.dev.js" data-isso="../" ></script>
<section>
<p>This is a link to a thread, which will display a comment counter:
<a href=".#isso-thread">How many Comments?</a></p>
<p>Below is the actual comment field.</p>
</section>
<section id="isso-thread" data-title="Isso Test"><noscript>Javascript needs to be activated to view comments.</noscript></section>
</div>
</div>
</body>
"""
def demo(self, env, req):
index = pkg_resources.resource_filename('isso', 'demo/index.html')
return send_from_directory(os_path.dirname(index), 'index.html', env)
"""
@api {post} /login/ Log in
@apiGroup Admin
@apiName login
@apiVersion 0.12.6
@apiPrivate
@apiDescription
Log in to admin, will redirect to `/admin/` on success. Must use form data, not `POST` JSON.
@apiBody {String} password
The admin password as set in `[admin] password` in the server config.
@apiExample {curl} Log in
curl -X POST 'https://comments.example.com/login' -F "password=strong_default_password_for_isso_admin" -c cookie.txt
@apiSuccessExample {html} Login successful:
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="https://comments.example.com/admin/">https://comments.example.com/admin/</a>. If not, click the link.
"""
def login(self, env, req):
if not self.isso.conf.getboolean("admin", "enabled"):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('disabled.html', isso_host_script=isso_host_script)
data = req.form
password = self.isso.conf.get("admin", "password")
if data['password'] and data['password'] == password:
response = redirect(re.sub(
r'/login/$',
'/admin/',
get_current_url(env, strip_querystring=True)
))
cookie = self.create_cookie(value=self.isso.sign({"logged": True}),
expires=datetime.now() + timedelta(1))
response.headers.add("Set-Cookie", cookie("admin-session"))
response.headers.add("X-Set-Cookie", cookie("isso-admin-session"))
return response
else:
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('login.html', isso_host_script=isso_host_script)
"""
@api {get} /admin/ Admin interface
@apiGroup Admin
@apiName admin
@apiVersion 0.12.6
@apiPrivate
@apiPermission admin
@apiDescription
Display an admin interface from which to manage comments. Will redirect to `/login` if not already logged in.
@apiQuery {Number} [page=0]
Page number
@apiQuery {Number{1,2,4}} [mode=2]
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiQuery {String{id,created,modified,likes,dislikes,tid}} [order_by=created]
Comment ordering
@apiQuery {Number{0,1}} [asc=0]
Ascending
@apiExample {curl} Listing of published comments:
curl 'https://comments.example.com/admin/?mode=1&page=0&order_by=modified&asc=1' -b cookie.txt
"""
def admin(self, env, req):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
if not self.isso.conf.getboolean("admin", "enabled"):
return render_template('disabled.html', isso_host_script=isso_host_script)
try:
data = self.isso.unsign(req.cookies.get('admin-session', ''),
max_age=60 * 60 * 24)
except BadSignature:
return render_template('login.html', isso_host_script=isso_host_script)
if not data or not data['logged']:
return render_template('login.html', isso_host_script=isso_host_script)
page_size = 100
page = int(req.args.get('page', 0))
order_by = req.args.get('order_by', 'created')
asc = int(req.args.get('asc', 0))
mode = int(req.args.get('mode', 2))
comments = self.comments.fetchall(mode=mode, page=page,
limit=page_size,
order_by=order_by,
asc=asc)
comments_enriched = []
for comment in list(comments):
comment['hash'] = self.isso.sign(comment['id'])
comments_enriched.append(comment)
comment_mode_count = self.comments.count_modes()
max_page = int(sum(comment_mode_count.values()) / 100)
return render_template('admin.html', comments=comments_enriched,
page=int(page), mode=int(mode),
conf=self.conf, max_page=max_page,
counts=comment_mode_count,
order_by=order_by, asc=asc,
isso_host_script=isso_host_script)
"""
@api {get} /latest latest
@apiGroup Comment
@apiName latest
@apiVersion 0.12.6
@apiDescription
Get the latest comments from the system, no matter which thread. Only available if `[general] latest-enabled` is set to `true` in server config.
@apiQuery {Number} limit
The quantity of last comments to retrieve
@apiExample {curl} Get the latest 5 comments
curl 'https://comments.example.com/latest?limit=5'
@apiUse commentResponse
@apiSuccessExample Example result:
[
{
"website": null,
"uri": "/some",
"author": null,
"parent": null,
"created": 1464912312.123416,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 3,
"likes": 1
},
{
"website": null,
"uri": "/other",
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 0
}
]
"""
def latest(self, environ, request):
# if the feature is not allowed, don't present the endpoint
if not self.conf.getboolean("latest-enabled"):
return NotFound(
"Unavailable because 'latest-enabled' not set by site admin"
)
# get and check the limit
bad_limit_msg = "Query parameter 'limit' is mandatory (integer, >0)"
try:
limit = int(request.args['limit'])
except (KeyError, ValueError):
return BadRequest(bad_limit_msg)
if limit <= 0:
return BadRequest(bad_limit_msg)
# retrieve the latest N comments from the DB
all_comments_gen = self.comments.fetchall(limit=None, order_by='created', mode='1')
comments = collections.deque(all_comments_gen, maxlen=limit)
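# Clarifying note: the deque with maxlen=limit consumes the whole generator but
# keeps only the final `limit` rows, i.e. the newest comments assuming
# fetchall() yields them oldest-first under the 'created' ordering.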
# prepare a special set of fields (except text which is rendered specifically)
fields = {
'author',
'created',
'dislikes',
'id',
'likes',
'mode',
'modified',
'parent',
'text',
'uri',
'website',
}
# process the retrieved comments and build results
result = []
for comment in comments:
processed = {key: comment[key] for key in fields}
processed['text'] = self.isso.render(comment['text'])
result.append(processed)
return JSON(result, 200)
| # -*- encoding: utf-8 -*-
import collections
import re
import time
import functools
import json # json.dumps to put URL in <script>
import pkg_resources
from configparser import NoOptionError
from datetime import datetime, timedelta
from html import escape
from io import BytesIO as StringIO
from os import path as os_path
from urllib.parse import unquote, urlparse
from xml.etree import ElementTree as ET
from itsdangerous import SignatureExpired, BadSignature
from werkzeug.exceptions import BadRequest, Forbidden, NotFound
from werkzeug.http import dump_cookie
from werkzeug.routing import Rule
from werkzeug.utils import redirect, send_from_directory
from werkzeug.wrappers import Response
from werkzeug.wsgi import get_current_url
from isso import utils, local
from isso.utils import (http, parse,
JSONResponse as JSON, XMLResponse as XML,
render_template)
from isso.utils.hash import md5, sha1
from isso.views import requires
# from Django apparently, looks good to me *duck*
__url_re = re.compile(
r'^'
r'(https?://)?'
# domain...
r'(?:(?:[\w](?:[\w-]{0,61}[\w])?\.)+(?:[\w]{2,6}\.?|[\w-]{2,}\.?)|'
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)'
r'$', re.IGNORECASE | re.UNICODE)
def isurl(text):
return __url_re.match(text) is not None
def normalize(url):
if not url.startswith(("http://", "https://")):
return "http://" + url
return url
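# Illustrative examples (clarifying notes, not exhaustive): with the \w-based
# character classes and re.UNICODE above, isurl("münchen.de") and
# isurl("https://müller.example.com/blog") now return True, while clearly
# malformed values such as "not a url" still do not match.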
def xhr(func):
"""A decorator to check for CSRF on POST/PUT/DELETE using a <form>
element and JS to execute automatically (see #40 for a proof-of-concept).
When an attacker uses a <form> to downvote a comment, the browser *should*
add a `Content-Type: ...` header with three possible values:
* application/x-www-form-urlencoded
* multipart/form-data
* text/plain
If the header is not sent or requests `application/json`, the request is
not forged (XHR is restricted by CORS separately).
"""
"""
@apiDefine csrf
@apiHeader {String="application/json"} Content-Type
The content type must be set to `application/json` to prevent CSRF attacks.
"""
def dec(self, env, req, *args, **kwargs):
if req.content_type and not req.content_type.startswith("application/json"):
raise Forbidden("CSRF")
return func(self, env, req, *args, **kwargs)
return dec
class API(object):
FIELDS = set(['id', 'parent', 'text', 'author', 'website',
'mode', 'created', 'modified', 'likes', 'dislikes', 'hash', 'gravatar_image', 'notification'])
# comment fields, that can be submitted
ACCEPT = set(['text', 'author', 'website', 'email', 'parent', 'title', 'notification'])
VIEWS = [
('fetch', ('GET', '/')),
('new', ('POST', '/new')),
('counts', ('POST', '/count')),
('feed', ('GET', '/feed')),
('latest', ('GET', '/latest')),
('view', ('GET', '/id/<int:id>')),
('edit', ('PUT', '/id/<int:id>')),
('delete', ('DELETE', '/id/<int:id>')),
('unsubscribe', ('GET', '/id/<int:id>/unsubscribe/<string:email>/<string:key>')),
('moderate', ('GET', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('moderate', ('POST', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('like', ('POST', '/id/<int:id>/like')),
('dislike', ('POST', '/id/<int:id>/dislike')),
('demo', ('GET', '/demo/')),
('preview', ('POST', '/preview')),
('config', ('GET', '/config')),
('login', ('POST', '/login/')),
('admin', ('GET', '/admin/'))
]
def __init__(self, isso, hasher):
self.isso = isso
self.hash = hasher.uhash
self.cache = isso.cache
self.signal = isso.signal
self.conf = isso.conf.section("general")
self.moderated = isso.conf.getboolean("moderation", "enabled")
# this is similar to the wordpress setting "Comment author must have a previously approved comment"
try:
self.approve_if_email_previously_approved = isso.conf.getboolean("moderation", "approve-if-email-previously-approved")
except NoOptionError:
self.approve_if_email_previously_approved = False
try:
self.trusted_proxies = list(isso.conf.getiter("server", "trusted-proxies"))
except NoOptionError:
self.trusted_proxies = []
# These configuration records can be read out by client
self.public_conf = {}
self.public_conf["reply-to-self"] = isso.conf.getboolean("guard", "reply-to-self")
self.public_conf["require-email"] = isso.conf.getboolean("guard", "require-email")
self.public_conf["require-author"] = isso.conf.getboolean("guard", "require-author")
self.public_conf["reply-notifications"] = isso.conf.getboolean("general", "reply-notifications")
self.public_conf["gravatar"] = isso.conf.getboolean("general", "gravatar")
if self.public_conf["gravatar"]:
self.public_conf["avatar"] = False
self.public_conf["feed"] = False
rss = isso.conf.section("rss")
if rss and rss.get('base'):
self.public_conf["feed"] = True
self.guard = isso.db.guard
self.threads = isso.db.threads
self.comments = isso.db.comments
for (view, (method, path)) in self.VIEWS:
isso.urls.add(
Rule(path, methods=[method], endpoint=getattr(self, view)))
@classmethod
def verify(cls, comment):
if comment.get("text") is None:
return False, "text is missing"
if not isinstance(comment.get("parent"), (int, type(None))):
return False, "parent must be an integer or null"
for key in ("text", "author", "website", "email"):
if not isinstance(comment.get(key), (str, type(None))):
return False, "%s must be a string or null" % key
if len(comment["text"].rstrip()) < 3:
return False, "text is too short (minimum length: 3)"
if len(comment["text"]) > 65535:
return False, "text is too long (maximum length: 65535)"
if len(comment.get("email") or "") > 254:
return False, "http://tools.ietf.org/html/rfc5321#section-4.5.3"
if comment.get("website"):
if len(comment["website"]) > 254:
return False, "arbitrary length limit"
if not isurl(comment["website"]):
return False, "Website not Django-conform"
return True, ""
# Common definitions for apidoc follow:
"""
@apiDefine plainParam
@apiQuery {Number=0,1} [plain=0]
If set to `1`, the plain text entered by the user will be returned in the comments’ `text` attribute (instead of the rendered markdown).
"""
"""
@apiDefine commentResponse
@apiSuccess {Number} id
The comment’s id (assigned by the server).
@apiSuccess {Number} parent
Id of the comment this comment is a reply to. `null` if this is a top-level-comment.
@apiSuccess {Number=1,2,4} mode
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiSuccess {String} author
The comment’s author’s name or `null`.
@apiSuccess {String} website
The comment’s author’s website or `null`.
@apiSuccess {String} hash
A hash uniquely identifying the comment’s author.
@apiSuccess {Number} created
UNIX timestamp of the time the comment was created (on the server).
@apiSuccess {Number} modified
UNIX timestamp of the time the comment was last modified (on the server). `null` if the comment was not yet modified.
"""
"""
@apiDefine admin Admin access needed
Only available to a logged-in site admin. Requires a valid `admin-session` cookie.
"""
"""
@api {post} /new create new
@apiGroup Comment
@apiName new
@apiVersion 0.12.6
@apiDescription
Creates a new comment. The server issues a cookie per new comment which acts as
an authentication token to modify or delete the comment.
The token is cryptographically signed and expires automatically after 900 seconds (=15min) by default.
@apiUse csrf
@apiQuery {String} uri
The uri of the thread to create the comment on.
@apiBody {String{3...65535}} text
The comment’s raw text.
@apiBody {String} [author]
The comment’s author’s name.
@apiBody {String{...254}} [email]
The comment’s author’s email address.
@apiBody {String{...254}} [website]
The comment’s author’s website’s url. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiBody {Number} [parent]
The parent comment’s id if the new comment is a response to an existing comment.
@apiExample {curl} Create a reply to comment with id 15:
curl 'https://comments.example.com/new?uri=/thread/' -d '{"text": "Stop saying that! *isso*!", "author": "Max Rant", "email": "[email protected]", "parent": 15}' -H 'Content-Type: application/json' -c cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Success after the above request:
HTTP/1.1 201 CREATED
Set-Cookie: 1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
X-Set-Cookie: isso-1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
{
"website": null,
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>Stop saying that! <em>isso</em>!</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "e644f6ee43c0",
"id": 23,
"likes": 0
}
"""
@xhr
@requires(str, 'uri')
def new(self, environ, request, uri):
data = request.json
for field in set(data.keys()) - API.ACCEPT:
data.pop(field)
for key in ("author", "email", "website", "parent"):
data.setdefault(key, None)
valid, reason = API.verify(data)
if not valid:
return BadRequest(reason)
for field in ("author", "email", "website"):
if data.get(field) is not None:
data[field] = escape(data[field], quote=False)
if data.get("website"):
data["website"] = normalize(data["website"])
data['mode'] = 2 if self.moderated else 1
data['remote_addr'] = self._remote_addr(request)
with self.isso.lock:
if uri not in self.threads:
if not data.get('title'):
with http.curl('GET', local("origin"), uri) as resp:
if resp and resp.status == 200:
uri, title = parse.thread(resp.read(), id=uri)
else:
return NotFound('URI does not exist %s')
else:
title = data['title']
thread = self.threads.new(uri, title)
self.signal("comments.new:new-thread", thread)
else:
thread = self.threads[uri]
# notify extensions that the new comment is about to save
self.signal("comments.new:before-save", thread, data)
valid, reason = self.guard.validate(uri, data)
if not valid:
self.signal("comments.new:guard", reason)
raise Forbidden(reason)
with self.isso.lock:
# if email-based auto-moderation enabled, check for previously approved author
# right before approval.
if self.approve_if_email_previously_approved and self.comments.is_previously_approved_author(data['email']):
data['mode'] = 1
rv = self.comments.add(uri, data)
# notify extension, that the new comment has been successfully saved
self.signal("comments.new:after-save", thread, rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
rv["hash"] = self.hash(rv['email'] or rv['remote_addr'])
self.cache.set(
'hash', (rv['email'] or rv['remote_addr']).encode('utf-8'), rv['hash'])
rv = self._add_gravatar_image(rv)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
# success!
self.signal("comments.new:finish", thread, rv)
resp = JSON(rv, 202 if rv["mode"] == 2 else 201)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
def _remote_addr(self, request):
"""Return the anonymized IP address of the requester.
Takes into consideration a potential X-Forwarded-For HTTP header
if a necessary server.trusted-proxies configuration entry is set.
Recipe source: https://stackoverflow.com/a/22936947/636849
"""
remote_addr = request.remote_addr
if self.trusted_proxies:
route = request.access_route + [remote_addr]
remote_addr = next((addr for addr in reversed(route)
if addr not in self.trusted_proxies), remote_addr)
return utils.anonymize(str(remote_addr))
def create_cookie(self, **kwargs):
"""
Setting cookies to SameSite=None requires "Secure" attribute.
For http-only, we need to override the dump_cookie() default SameSite=None
or the cookie will be rejected.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite#samesitenone_requires_secure
"""
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
samesite = self.isso.conf.get("server", "samesite")
if isso_host_script.startswith("https://"):
secure = True
samesite = samesite or "None"
else:
secure = False
samesite = samesite or "Lax"
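        # Example: with public-endpoint = https://comments.example.com the cookie gets
        # "Secure; SameSite=None" (unless samesite is configured explicitly); a plain
        # http endpoint falls back to "SameSite=Lax" without the Secure flag.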
return functools.partial(dump_cookie, **kwargs,
secure=secure, samesite=samesite)
"""
@api {get} /id/:id view
@apiGroup Comment
@apiName view
@apiVersion 0.12.6
@apiDescription
View an existing comment, for the purpose of editing. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
@apiParam {Number} id
The id of the comment to view.
@apiUse plainParam
@apiExample {curl} View the comment with id 4:
curl 'https://comments.example.com/id/4' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample Example result:
{
"website": null,
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 1
}
"""
def view(self, environ, request, id):
rv = self.comments.get(id)
if rv is None:
raise NotFound
try:
self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
if request.args.get('plain', '0') == '0':
rv['text'] = self.isso.render(rv['text'])
return JSON(rv, 200)
"""
@api {put} /id/:id edit
@apiGroup Comment
@apiName edit
@apiVersion 0.12.6
@apiDescription
Edit an existing comment. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details. Editing a comment will set a new edit cookie in the response.
@apiUse csrf
@apiParam {Number} id
The id of the comment to edit.
@apiBody {String{3...65535}} text
A new (raw) text for the comment.
@apiBody {String} [author]
The modified comment’s author’s name.
@apiBody {String{...254}} [website]
The modified comment’s author’s website. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiExample {curl} Edit comment with id 23:
curl -X PUT 'https://comments.example.com/id/23' -d {"text": "I see your point. However, I still disagree.", "website": "maxrant.important.com"} -H 'Content-Type: application/json' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Example response:
HTTP/1.1 200 OK
{
"website": "maxrant.important.com",
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>I see your point. However, I still disagree.</p>",
"dislikes": 0,
"modified": 1464943439.073961,
"mode": 1,
"id": 23,
"likes": 0
}
"""
@xhr
def edit(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
data = request.json
if data.get("text") is None or len(data["text"]) < 3:
raise BadRequest("no text given")
for key in set(data.keys()) - set(["text", "author", "website"]):
data.pop(key)
data['modified'] = time.time()
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
resp = JSON(rv, 200)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
"""
@api {delete} /id/:id delete
@apiGroup Comment
@apiName delete
@apiVersion 0.12.6
@apiDescription
Delete an existing comment. Deleting a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
Returns either `null` or a comment with an empty text value when the comment is still referenced by other comments.
@apiUse csrf
@apiParam {Number} id
Id of the comment to delete.
@apiExample {curl} Delete comment with id 14:
curl -X DELETE 'https://comments.example.com/id/14' -b cookie.txt
@apiSuccessExample Successful deletion returns null and deletes cookie:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
null
@apiSuccessExample {json} Comment still referenced by another:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
{
"id": 14,
"parent": null,
"created": 1653432621.0512516,
"modified": 1653434488.571937,
"mode": 4,
"text": "",
"author": null,
"website": null,
"likes": 0,
"dislikes": 0,
"notification": 0
}
"""
@xhr
def delete(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ""))
except (SignatureExpired, BadSignature):
raise Forbidden
else:
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
with self.isso.lock:
rv = self.comments.delete(id)
if rv:
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.delete", id)
resp = JSON(rv, 200)
cookie = self.create_cookie(expires=0, max_age=0)
resp.headers.add("Set-Cookie", cookie(str(id)))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % id))
return resp
"""
@api {get} /id/:id/unsubscribe/:email/:key unsubscribe
@apiGroup Comment
@apiName unsubscribe
@apiVersion 0.12.6
@apiDescription
Opt out from getting any further email notifications about replies to a particular comment. In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by isso.
@apiParam {Number} id
The id of the comment to unsubscribe from replies to.
@apiParam {String} email
The email address of the subscriber.
@apiParam {String} key
The key to authenticate the subscriber.
@apiExample {curl} Unsubscribe Alice from replies to comment with id 13:
curl -X GET 'https://comments.example.com/id/13/unsubscribe/[email protected]/WyJ1bnN1YnNjcmliZSIsImFsaWNlQGV4YW1wbGUuY29tIl0.DdcH9w.Wxou-l22ySLFkKUs7RUHnoM8Kos'
@apiSuccessExample {html} Using GET:
<!DOCTYPE html>
<html>
<head>Successfully unsubscribed</head>
<body>
<p>You have been unsubscribed from replies in the given conversation.</p>
</body>
</html>
"""
def unsubscribe(self, environ, request, id, email, key):
email = unquote(email)
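        # The key must be a token signed by this Isso instance that decodes back to the
        # pair ['unsubscribe', <email>]; it effectively never expires (max_age=2**32).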
try:
rv = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
if not isinstance(rv, list) or len(rv) != 2:
raise Forbidden
if rv[0] != 'unsubscribe' or rv[1] != email:
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
with self.isso.lock:
self.comments.unsubscribe(email, id)
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
" <title>Successfully unsubscribed</title>"
"</head>"
"<body>"
" <p>You have been unsubscribed from replies in the given conversation.</p>"
"</body>"
"</html>")
return Response(modal, 200, content_type="text/html")
"""
@api {post} /id/:id/:action/:key moderate
@apiGroup Comment
@apiName moderate
@apiVersion 0.12.6
@apiDescription
Publish or delete a comment that is in the moderation queue (mode `2`). In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by Isso or provided in the admin interface.
This endpoint can also be used with a `GET` request. In that case, a html page is returned that asks the user whether they are sure to perform the selected action. If they select “yes”, the query is repeated using `POST`.
@apiParam {Number} id
The id of the comment to moderate.
@apiParam {String=activate,edit,delete} action
- `activate` to publish the comment (change its mode to `1`).
- `edit`: Send `text`, `author`, `email` and `website` via `POST`.
To be used from the admin interface. Better use the `edit` `PUT` endpoint.
- `delete` to delete the comment.
@apiParam {String} key
The moderation key to authenticate the moderation.
@apiExample {curl} delete comment with id 13:
curl -X POST 'https://comments.example.com/id/13/delete/MTM.CjL6Fg.REIdVXa-whJS_x8ojQL4RrXnuF4'
@apiSuccessExample {html} Request deletion using GET:
<!DOCTYPE html>
<html>
<head>
<script>
if (confirm('Delete: Are you sure?')) {
xhr = new XMLHttpRequest;
xhr.open('POST', window.location.href);
xhr.send(null);
xhr.onload = function() {
window.location.href = "https://example.com/example-thread/#isso-13";
};
}
</script>
@apiSuccessExample Delete using POST:
Comment has been deleted
@apiSuccessExample Activate using POST:
Comment has been activated
"""
def moderate(self, environ, request, id, action, key):
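        # The moderation key is a signed token that decodes back to the comment id;
        # max_age=2**32 seconds (roughly 136 years) means it effectively never expires.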
try:
id = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
thread = self.threads.get(item['tid'])
link = local("origin") + thread["uri"] + "#isso-%i" % item["id"]
if request.method == "GET":
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
"<script>"
" if (confirm('%s: Are you sure?')) {"
" xhr = new XMLHttpRequest;"
" xhr.open('POST', window.location.href);"
" xhr.send(null);"
" xhr.onload = function() {"
" window.location.href = %s;"
" };"
" }"
"</script>" % (action.capitalize(), json.dumps(link)))
return Response(modal, 200, content_type="text/html")
if action == "activate":
if item['mode'] == 1:
return Response("Already activated", 200)
with self.isso.lock:
self.comments.activate(id)
self.signal("comments.activate", thread, item)
return Response("Comment has been activated", 200)
elif action == "edit":
data = request.json
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
return JSON(rv, 200)
else:
with self.isso.lock:
self.comments.delete(id)
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
self.signal("comments.delete", id)
return Response("Comment has been deleted", 200)
"""
@api {get} / Get comments
@apiGroup Thread
@apiName fetch
@apiVersion 0.12.6
@apiDescription Queries the publicly visible comments of a thread.
@apiQuery {String} uri
The URI of thread to get the comments from.
@apiQuery {Number} [parent]
Return only comments that are children of the comment with the provided ID.
@apiUse plainParam
@apiQuery {Number} [limit]
The maximum number of returned top-level comments. Omit for unlimited results.
@apiQuery {Number} [nested_limit]
The maximum number of returned nested comments per comment. Omit for unlimited results.
@apiQuery {Number} [after]
Includes only comments that were added after the provided UNIX timestamp.
@apiSuccess {Number} id
Id of the comment whose replies are listed in `replies`. `null` for the list of top-level comments.
@apiSuccess {Number} total_replies
The number of replies if the `limit` parameter was not set. If `after` is set to `X`, this is the number of comments that were created after `X`. So setting `after` may change this value!
@apiSuccess {Number} hidden_replies
The number of comments that were omitted from the results because of the `limit` request parameter. Usually, this will be `total_replies` - `limit`.
@apiSuccess {Object[]} replies
The list of comments. Each comment also has the `total_replies`, `replies`, `id` and `hidden_replies` properties to represent nested comments.
@apiSuccess {Object[]} config
Object holding only the client configuration parameters that depend on server settings. Will be dropped in a future version of Isso. Use the dedicated `/config` endpoint instead.
@apiExample {curl} Get 2 comments with 5 responses:
curl 'https://comments.example.com/?uri=/thread/&limit=2&nested_limit=5'
@apiSuccessExample {json} Example response:
{
"total_replies": 14,
"replies": [
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.732863,
"text": "<p>Hello, World!</p>",
"total_replies": 1,
"hidden_replies": 0,
"dislikes": 2,
"modified": null,
"mode": 1,
"replies": [
{
"website": null,
"author": null,
"parent": 1,
"created": 1464818460.769638,
"text": "<p>Hi, now some Markdown: <em>Italic</em>, <strong>bold</strong>, <code>monospace</code>.</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "2af4e1a6c96a",
"id": 2,
"likes": 2
}
],
"hash": "1cb6cc0309a2",
"id": 1,
"likes": 2
},
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.80574,
"text": "<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Accusantium at commodi cum deserunt dolore, error fugiat harum incidunt, ipsa ipsum mollitia nam provident rerum sapiente suscipit tempora vitae? Est, qui?</p>",
"total_replies": 0,
"hidden_replies": 0,
"dislikes": 0,
"modified": null,
"mode": 1,
"replies": [],
"hash": "1cb6cc0309a2",
"id": 3,
"likes": 0
        }
    ],
"id": null,
"hidden_replies": 12
}
"""
@requires(str, 'uri')
def fetch(self, environ, request, uri):
args = {
'uri': uri,
'after': request.args.get('after', 0)
}
try:
args['limit'] = int(request.args.get('limit'))
except TypeError:
args['limit'] = None
except ValueError:
return BadRequest("limit should be integer")
if request.args.get('parent') is not None:
try:
args['parent'] = int(request.args.get('parent'))
root_id = args['parent']
except ValueError:
return BadRequest("parent should be integer")
else:
args['parent'] = None
root_id = None
plain = request.args.get('plain', '0') == '0'
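        # Note: despite its name, `plain` is True for the default ?plain=0, i.e. when the
        # stored Markdown should be rendered to HTML; ?plain=1 returns the raw text instead.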
reply_counts = self.comments.reply_count(uri, after=args['after'])
if args['limit'] == 0:
root_list = []
else:
root_list = list(self.comments.fetch(**args))
if root_id not in reply_counts:
reply_counts[root_id] = 0
try:
nested_limit = int(request.args.get('nested_limit'))
except TypeError:
nested_limit = None
except ValueError:
return BadRequest("nested_limit should be integer")
rv = {
'id': root_id,
'total_replies': reply_counts[root_id],
'hidden_replies': reply_counts[root_id] - len(root_list),
'replies': self._process_fetched_list(root_list, plain),
'config': self.public_conf
}
# We are only checking for one level deep comments
if root_id is None:
for comment in rv['replies']:
if comment['id'] in reply_counts:
comment['total_replies'] = reply_counts[comment['id']]
if nested_limit is not None:
if nested_limit > 0:
args['parent'] = comment['id']
args['limit'] = nested_limit
replies = list(self.comments.fetch(**args))
else:
replies = []
else:
args['parent'] = comment['id']
replies = list(self.comments.fetch(**args))
else:
comment['total_replies'] = 0
replies = []
comment['hidden_replies'] = comment['total_replies'] - \
len(replies)
comment['replies'] = self._process_fetched_list(replies, plain)
return JSON(rv, 200)
def _add_gravatar_image(self, item):
if not self.conf.getboolean('gravatar'):
return item
email = item['email'] or item['author'] or ''
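        # Gravatar addresses avatars by an MD5 hash of the email; fall back to the
        # author name (or an empty string) when no email was supplied.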
email_md5_hash = md5(email)
gravatar_url = self.conf.get('gravatar-url')
item['gravatar_image'] = gravatar_url.format(email_md5_hash)
return item
def _process_fetched_list(self, fetched_list, plain=False):
for item in fetched_list:
key = item['email'] or item['remote_addr']
val = self.cache.get('hash', key.encode('utf-8'))
if val is None:
val = self.hash(key)
self.cache.set('hash', key.encode('utf-8'), val)
item['hash'] = val
item = self._add_gravatar_image(item)
for key in set(item.keys()) - API.FIELDS:
item.pop(key)
if plain:
for item in fetched_list:
item['text'] = self.isso.render(item['text'])
return fetched_list
"""
@apiDefine likeResponse
@apiSuccess {Number} likes
The (new) number of likes on the comment.
@apiSuccess {Number} dislikes
The (new) number of dislikes on the comment.
@apiSuccessExample Return updated vote counts:
{
"likes": 4,
"dislikes": 3
}
"""
"""
@api {post} /id/:id/like like
@apiGroup Comment
@apiName like
@apiVersion 0.12.6
@apiDescription
Puts a “like” on a comment. The author of a comment cannot like their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to like.
@apiExample {curl} Like comment with id 23:
curl -X POST 'https://comments.example.com/id/23/like'
@apiUse likeResponse
"""
@xhr
def like(self, environ, request, id):
nv = self.comments.vote(True, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /id/:id/dislike dislike
@apiGroup Comment
@apiName dislike
@apiVersion 0.12.6
@apiDescription
Puts a “dislike” on a comment. The author of a comment cannot dislike their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to dislike.
@apiExample {curl} Dislike comment with id 23:
curl -X POST 'https://comments.example.com/id/23/dislike'
@apiUse likeResponse
"""
@xhr
def dislike(self, environ, request, id):
nv = self.comments.vote(False, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /preview preview
@apiGroup Comment
@apiName preview
@apiVersion 0.12.6
@apiDescription
Render comment text using markdown.
@apiBody {String{3...65535}} text
(Raw) comment text
@apiSuccess {String} text
Rendered comment text
@apiExample {curl} Preview comment:
curl -X POST 'https://comments.example.com/preview' -d '{"text": "A sample comment"}'
@apiSuccessExample {json} Rendered comment:
{
"text": "<p>A sample comment</p>"
}
"""
def preview(self, environment, request):
data = request.json
if data.get("text", None) is None:
raise BadRequest("no text given")
return JSON({'text': self.isso.render(data["text"])}, 200)
"""
@api {post} /count Count comments
@apiGroup Thread
@apiName counts
@apiVersion 0.12.6
@apiDescription
Counts the number of comments on multiple threads. The requestor provides a list of thread uris. The number of comments on each thread is returned as a list, in the same order as the threads were requested. The counts include comments that are responses to comments, but only published comments (i.e. excluding comments pending moderation).
@apiBody {String[]} urls
Array of URLs for which to fetch comment counts
@apiExample {curl} Get the respective counts of 5 threads:
curl -X POST 'https://comments.example.com/count' -d '["/blog/firstPost.html", "/blog/controversalPost.html", "/blog/howToCode.html", "/blog/boringPost.html", "/blog/isso.html"]'
@apiSuccessExample {json} Counts of 5 threads:
[2, 18, 4, 0, 3]
"""
def counts(self, environ, request):
data = request.json
        if not isinstance(data, list) or not all(isinstance(x, str) for x in data):
raise BadRequest("JSON must be a list of URLs")
return JSON(self.comments.count(*data), 200)
"""
@api {get} /feed Atom feed for comments
@apiGroup Thread
@apiName feed
@apiVersion 0.12.6
@apiDescription
Provide an Atom feed for the given thread. Only available if `[rss] base` is set in server config. By default, up to 100 comments are returned.
@apiQuery {String} uri
The uri of the thread to display a feed for
@apiExample {curl} Get an Atom feed for /thread/foo in XML format:
curl 'https://comments.example.com/feed?uri=/thread/foo'
@apiSuccessExample Atom feed for /thread/foo:
<?xml version='1.0' encoding='utf-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:thr="http://purl.org/syndication/thread/1.0">
<updated>2022-05-24T20:38:04.032789Z</updated>
<id>tag:example.com,2018:/isso/thread/thread/foo</id>
<title>Comments for example.com/thread/foo</title>
<entry>
<id>tag:example.com,2018:/isso/1/2</id>
<title>Comment #2</title>
<updated>2022-05-24T20:38:04.032789Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-2" />
<content type="html"><p>And another</p></content>
</entry>
<entry>
<id>tag:example.com,2018:/isso/1/1</id>
<title>Comment #1</title>
<updated>2022-05-24T20:38:00.837703Z</updated>
<author>
<name>Jane Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-1" />
<content type="html"><p>A sample comment</p></content>
</entry>
</feed>
"""
@requires(str, 'uri')
def feed(self, environ, request, uri):
conf = self.isso.conf.section("rss")
if not conf.get('base'):
raise NotFound
args = {
'uri': uri,
'order_by': 'id',
'asc': 0,
'limit': conf.getint('limit')
}
try:
args['limit'] = max(int(request.args.get('limit')), args['limit'])
except TypeError:
pass
except ValueError:
return BadRequest("limit should be integer")
comments = self.comments.fetch(**args)
base = conf.get('base').rstrip('/')
hostname = urlparse(base).netloc
# Let's build an Atom feed.
# RFC 4287: https://tools.ietf.org/html/rfc4287
# RFC 4685: https://tools.ietf.org/html/rfc4685 (threading extensions)
# For IDs: http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id
feed = ET.Element('feed', {
'xmlns': 'http://www.w3.org/2005/Atom',
'xmlns:thr': 'http://purl.org/syndication/thread/1.0'
})
# For feed ID, we would use thread ID, but we may not have
# one. Therefore, we use the URI. We don't have a year
# either...
id = ET.SubElement(feed, 'id')
id.text = 'tag:{hostname},2018:/isso/thread{uri}'.format(
hostname=hostname, uri=uri)
# For title, we don't have much either. Be pretty generic.
title = ET.SubElement(feed, 'title')
title.text = 'Comments for {hostname}{uri}'.format(
hostname=hostname, uri=uri)
comment0 = None
for comment in comments:
if comment0 is None:
comment0 = comment
entry = ET.SubElement(feed, 'entry')
# We don't use a real date in ID either to help with
# threading.
id = ET.SubElement(entry, 'id')
id.text = 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['id'])
title = ET.SubElement(entry, 'title')
title.text = 'Comment #{}'.format(comment['id'])
updated = ET.SubElement(entry, 'updated')
updated.text = '{}Z'.format(datetime.fromtimestamp(
comment['modified'] or comment['created']).isoformat())
author = ET.SubElement(entry, 'author')
name = ET.SubElement(author, 'name')
name.text = comment['author']
ET.SubElement(entry, 'link', {
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['id'])
})
content = ET.SubElement(entry, 'content', {
'type': 'html',
})
content.text = self.isso.render(comment['text'])
if comment['parent']:
ET.SubElement(entry, 'thr:in-reply-to', {
'ref': 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['parent']),
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['parent'])
})
# Updated is mandatory. If we have comments, we use the date
# of last modification of the first one (which is the last
# one). Otherwise, we use a fixed date.
updated = ET.Element('updated')
if comment0 is None:
updated.text = '1970-01-01T01:00:00Z'
else:
updated.text = datetime.fromtimestamp(
comment0['modified'] or comment0['created']).isoformat()
updated.text += 'Z'
feed.insert(0, updated)
output = StringIO()
ET.ElementTree(feed).write(output,
encoding='utf-8',
xml_declaration=True)
response = XML(output.getvalue(), 200)
# Add an etag/last-modified value for caching purpose
if comment0 is None:
response.set_etag('empty')
response.last_modified = 0
else:
response.set_etag('{tid}-{id}'.format(**comment0))
response.last_modified = comment0['modified'] or comment0['created']
return response.make_conditional(request)
"""
@api {get} /config Fetch client config
@apiGroup Thread
@apiName config
@apiVersion 0.13.0
@apiDescription
Returns only the client configuration parameters that depend on server settings.
@apiSuccess {Object[]} config
The client configuration object.
@apiSuccess {Boolean} config.reply-to-self
Commenters can reply to their own comments.
@apiSuccess {Boolean} config.require-author
Commenters must enter a valid name.
@apiSuccess {Boolean} config.require-email
Commenters must enter a valid email address.
@apiSuccess {Boolean} config.reply-notifications
Enable reply notifications via E-mail.
@apiSuccess {Boolean} config.gravatar
Load images from Gravatar service instead of generating them. Also disables regular avatars (see below).
@apiSuccess {Boolean} config.avatar
To avoid having both regular avatars and Gravatars side-by-side,
setting `gravatar` will disable regular avatars. The `avatar` key will
only be sent by the server if `gravatar` is set.
@apiSuccess {Boolean} config.feed
Enable or disable the addition of a link to the feed for the comment
thread.
@apiExample {curl} get the client config:
curl 'https://comments.example.com/config'
@apiSuccessExample {json} Client config:
{
"config": {
"reply-to-self": false,
"require-email": false,
"require-author": false,
"reply-notifications": false,
"gravatar": true,
"avatar": false,
"feed": false
}
}
"""
def config(self, environment, request):
rv = {'config': self.public_conf}
return JSON(rv, 200)
"""
@api {get} /demo/ Isso demo page
@apiGroup Demo
@apiName demo
@apiVersion 0.13.0
@apiPrivate
@apiDescription
Displays a demonstration of Isso with a thread counter and comment widget.
@apiExample {curl} Get demo page
curl 'https://comments.example.com/demo/'
@apiSuccessExample {html} Demo page:
<!DOCTYPE html>
<head>
<title>Isso Demo</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div id="page">
<div id="wrapper" style="max-width: 900px; margin-left: auto; margin-right: auto;">
<h2><a href=".">Isso Demo</a></h2>
<script src="../js/embed.dev.js" data-isso="../" ></script>
<section>
<p>This is a link to a thread, which will display a comment counter:
<a href=".#isso-thread">How many Comments?</a></p>
<p>Below is the actual comment field.</p>
</section>
<section id="isso-thread" data-title="Isso Test"><noscript>Javascript needs to be activated to view comments.</noscript></section>
</div>
</div>
</body>
"""
def demo(self, env, req):
index = pkg_resources.resource_filename('isso', 'demo/index.html')
return send_from_directory(os_path.dirname(index), 'index.html', env)
"""
@api {post} /login/ Log in
@apiGroup Admin
@apiName login
@apiVersion 0.12.6
@apiPrivate
@apiDescription
Log in to the admin interface; redirects to `/admin/` on success. The password must be sent as form data, not as JSON.
@apiBody {String} password
The admin password as set in `[admin] password` in the server config.
@apiExample {curl} Log in
curl -X POST 'https://comments.example.com/login' -F "password=strong_default_password_for_isso_admin" -c cookie.txt
@apiSuccessExample {html} Login successful:
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="https://comments.example.com/admin/">https://comments.example.com/admin/</a>. If not, click the link.
"""
def login(self, env, req):
if not self.isso.conf.getboolean("admin", "enabled"):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('disabled.html', isso_host_script=isso_host_script)
data = req.form
password = self.isso.conf.get("admin", "password")
if data['password'] and data['password'] == password:
response = redirect(re.sub(
r'/login/$',
'/admin/',
get_current_url(env, strip_querystring=True)
))
cookie = self.create_cookie(value=self.isso.sign({"logged": True}),
expires=datetime.now() + timedelta(1))
response.headers.add("Set-Cookie", cookie("admin-session"))
response.headers.add("X-Set-Cookie", cookie("isso-admin-session"))
return response
else:
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('login.html', isso_host_script=isso_host_script)
"""
@api {get} /admin/ Admin interface
@apiGroup Admin
@apiName admin
@apiVersion 0.12.6
@apiPrivate
@apiPermission admin
@apiDescription
Display an admin interface from which to manage comments. Will redirect to `/login` if not already logged in.
@apiQuery {Number} [page=0]
Page number
@apiQuery {Number=1,2,4} [mode=2]
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiQuery {String=id,created,modified,likes,dislikes,tid} [order_by=created]
Comment ordering
@apiQuery {Number=0,1} [asc=0]
Ascending
@apiExample {curl} Listing of published comments:
curl 'https://comments.example.com/admin/?mode=1&page=0&order_by=modified&asc=1' -b cookie.txt
"""
def admin(self, env, req):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
if not self.isso.conf.getboolean("admin", "enabled"):
return render_template('disabled.html', isso_host_script=isso_host_script)
try:
data = self.isso.unsign(req.cookies.get('admin-session', ''),
max_age=60 * 60 * 24)
except BadSignature:
return render_template('login.html', isso_host_script=isso_host_script)
if not data or not data['logged']:
return render_template('login.html', isso_host_script=isso_host_script)
page_size = 100
page = int(req.args.get('page', 0))
order_by = req.args.get('order_by', 'created')
asc = int(req.args.get('asc', 0))
mode = int(req.args.get('mode', 2))
comments = self.comments.fetchall(mode=mode, page=page,
limit=page_size,
order_by=order_by,
asc=asc)
comments_enriched = []
for comment in list(comments):
comment['hash'] = self.isso.sign(comment['id'])
comments_enriched.append(comment)
comment_mode_count = self.comments.count_modes()
max_page = int(sum(comment_mode_count.values()) / 100)
return render_template('admin.html', comments=comments_enriched,
page=int(page), mode=int(mode),
conf=self.conf, max_page=max_page,
counts=comment_mode_count,
order_by=order_by, asc=asc,
isso_host_script=isso_host_script)
"""
@api {get} /latest latest
@apiGroup Comment
@apiName latest
@apiVersion 0.12.6
@apiDescription
Get the latest comments from the system, no matter which thread. Only available if `[general] latest-enabled` is set to `true` in server config.
@apiQuery {Number} limit
The number of latest comments to retrieve
@apiExample {curl} Get the latest 5 comments
curl 'https://comments.example.com/latest?limit=5'
@apiUse commentResponse
@apiSuccessExample Example result:
[
{
"website": null,
"uri": "/some",
"author": null,
"parent": null,
"created": 1464912312.123416,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 3,
"likes": 1
},
{
"website": null,
"uri": "/other",
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 0
}
]
"""
def latest(self, environ, request):
# if the feature is not allowed, don't present the endpoint
if not self.conf.getboolean("latest-enabled"):
return NotFound(
"Unavailable because 'latest-enabled' not set by site admin"
)
# get and check the limit
bad_limit_msg = "Query parameter 'limit' is mandatory (integer, >0)"
try:
limit = int(request.args['limit'])
except (KeyError, ValueError):
return BadRequest(bad_limit_msg)
if limit <= 0:
return BadRequest(bad_limit_msg)
# retrieve the latest N comments from the DB
all_comments_gen = self.comments.fetchall(limit=None, order_by='created', mode='1')
comments = collections.deque(all_comments_gen, maxlen=limit)
# prepare a special set of fields (except text which is rendered specifically)
fields = {
'author',
'created',
'dislikes',
'id',
'likes',
'mode',
'modified',
'parent',
'text',
'uri',
'website',
}
# process the retrieved comments and build results
result = []
for comment in comments:
processed = {key: comment[key] for key in fields}
processed['text'] = self.isso.render(comment['text'])
result.append(processed)
return JSON(result, 200)
| schneidr | 9a0e1867e4ebe7e1ee7106584adb29b16880a955 | 73d9886100fd56cbceb38e2e00b84f52f0328a8c | My bad, the reason for removing the port was not urlparse, it is `validators.domain()` which does not accept `domain:port`. I guess I could clean this up by using `hostname` instead of `netloc`, but if the complete URL is supposed to be validated these lines are most probably not staying anyway. | schneidr | 1 |
posativ/isso | 952 | Allow umlaut domains for website addresses | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [x] (If adding features:) I have added tests to cover my changes
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
Changed website validation to allow domain names containing umlauts
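As a rough, illustrative sketch only (not the actual diff of this PR), the direction discussed in review is to validate the host part of the URL rather than matching the whole URL against an ASCII-only regex: `validators.domain()` rejects a trailing `:port`, so the parsed `hostname` (not `netloc`) would be used, and IDNA-encoding the host is one way to let umlaut domains pass an ASCII-only check. The helper name below is made up for illustration.

```python
# Illustrative only - names and exact behaviour here are assumptions, not this PR's code.
from urllib.parse import urlparse

import validators  # third-party package mentioned in the review discussion


def website_is_valid(url: str) -> bool:
    if not url.startswith(("http://", "https://")):
        url = "http://" + url  # same normalization Isso applies before storing
    host = urlparse(url).hostname or ""  # hostname drops the :port that validators.domain() rejects
    if host == "localhost":
        return True
    try:
        # "müller.de" -> "xn--mller-kva.de", so an ASCII-only domain check still passes
        host = host.encode("idna").decode("ascii")
    except UnicodeError:
        return False
    return bool(validators.domain(host))
```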
## Why is this necessary?
Resolves issue #951 | null | 2023-04-18 06:57:32+00:00 | 2023-08-04 13:01:56+00:00 | isso/views/comments.py | # -*- encoding: utf-8 -*-
import collections
import re
import time
import functools
import json # json.dumps to put URL in <script>
import pkg_resources
from configparser import NoOptionError
from datetime import datetime, timedelta
from html import escape
from io import BytesIO as StringIO
from os import path as os_path
from urllib.parse import unquote, urlparse
from xml.etree import ElementTree as ET
from itsdangerous import SignatureExpired, BadSignature
from werkzeug.exceptions import BadRequest, Forbidden, NotFound
from werkzeug.http import dump_cookie
from werkzeug.routing import Rule
from werkzeug.utils import redirect, send_from_directory
from werkzeug.wrappers import Response
from werkzeug.wsgi import get_current_url
from isso import utils, local
from isso.utils import (http, parse,
JSONResponse as JSON, XMLResponse as XML,
render_template)
from isso.utils.hash import md5, sha1
from isso.views import requires
# from Django apparently, looks good to me *duck*
__url_re = re.compile(
r'^'
r'(https?://)?'
# domain...
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)'
r'$', re.IGNORECASE)
def isurl(text):
return __url_re.match(text) is not None
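# Note: the pattern above only matches ASCII host labels ([A-Z0-9-]), so umlaut domains
# such as "müller.de" fail isurl() unless submitted in punycode form ("xn--mller-kva.de");
# that limitation is what issue #951 (and this pull request) addresses.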
def normalize(url):
if not url.startswith(("http://", "https://")):
return "http://" + url
return url
def xhr(func):
"""A decorator to check for CSRF on POST/PUT/DELETE using a <form>
element and JS to execute automatically (see #40 for a proof-of-concept).
When an attacker uses a <form> to downvote a comment, the browser *should*
add a `Content-Type: ...` header with three possible values:
* application/x-www-form-urlencoded
* multipart/form-data
* text/plain
If the header is not sent or requests `application/json`, the request is
not forged (XHR is restricted by CORS separately).
"""
"""
@apiDefine csrf
@apiHeader {String="application/json"} Content-Type
The content type must be set to `application/json` to prevent CSRF attacks.
"""
def dec(self, env, req, *args, **kwargs):
if req.content_type and not req.content_type.startswith("application/json"):
raise Forbidden("CSRF")
return func(self, env, req, *args, **kwargs)
return dec
class API(object):
FIELDS = set(['id', 'parent', 'text', 'author', 'website',
'mode', 'created', 'modified', 'likes', 'dislikes', 'hash', 'gravatar_image', 'notification'])
# comment fields, that can be submitted
ACCEPT = set(['text', 'author', 'website', 'email', 'parent', 'title', 'notification'])
VIEWS = [
('fetch', ('GET', '/')),
('new', ('POST', '/new')),
('counts', ('POST', '/count')),
('feed', ('GET', '/feed')),
('latest', ('GET', '/latest')),
('view', ('GET', '/id/<int:id>')),
('edit', ('PUT', '/id/<int:id>')),
('delete', ('DELETE', '/id/<int:id>')),
('unsubscribe', ('GET', '/id/<int:id>/unsubscribe/<string:email>/<string:key>')),
('moderate', ('GET', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('moderate', ('POST', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('like', ('POST', '/id/<int:id>/like')),
('dislike', ('POST', '/id/<int:id>/dislike')),
('demo', ('GET', '/demo/')),
('preview', ('POST', '/preview')),
('config', ('GET', '/config')),
('login', ('POST', '/login/')),
('admin', ('GET', '/admin/'))
]
def __init__(self, isso, hasher):
self.isso = isso
self.hash = hasher.uhash
self.cache = isso.cache
self.signal = isso.signal
self.conf = isso.conf.section("general")
self.moderated = isso.conf.getboolean("moderation", "enabled")
# this is similar to the wordpress setting "Comment author must have a previously approved comment"
try:
self.approve_if_email_previously_approved = isso.conf.getboolean("moderation", "approve-if-email-previously-approved")
except NoOptionError:
self.approve_if_email_previously_approved = False
try:
self.trusted_proxies = list(isso.conf.getiter("server", "trusted-proxies"))
except NoOptionError:
self.trusted_proxies = []
# These configuration records can be read out by the client
self.public_conf = {}
self.public_conf["reply-to-self"] = isso.conf.getboolean("guard", "reply-to-self")
self.public_conf["require-email"] = isso.conf.getboolean("guard", "require-email")
self.public_conf["require-author"] = isso.conf.getboolean("guard", "require-author")
self.public_conf["reply-notifications"] = isso.conf.getboolean("general", "reply-notifications")
self.public_conf["gravatar"] = isso.conf.getboolean("general", "gravatar")
if self.public_conf["gravatar"]:
self.public_conf["avatar"] = False
self.public_conf["feed"] = False
rss = isso.conf.section("rss")
if rss and rss.get('base'):
self.public_conf["feed"] = True
self.guard = isso.db.guard
self.threads = isso.db.threads
self.comments = isso.db.comments
for (view, (method, path)) in self.VIEWS:
isso.urls.add(
Rule(path, methods=[method], endpoint=getattr(self, view)))
@classmethod
def verify(cls, comment):
if comment.get("text") is None:
return False, "text is missing"
if not isinstance(comment.get("parent"), (int, type(None))):
return False, "parent must be an integer or null"
for key in ("text", "author", "website", "email"):
if not isinstance(comment.get(key), (str, type(None))):
return False, "%s must be a string or null" % key
if len(comment["text"].rstrip()) < 3:
return False, "text is too short (minimum length: 3)"
if len(comment["text"]) > 65535:
return False, "text is too long (maximum length: 65535)"
if len(comment.get("email") or "") > 254:
return False, "http://tools.ietf.org/html/rfc5321#section-4.5.3"
if comment.get("website"):
if len(comment["website"]) > 254:
return False, "arbitrary length limit"
if not isurl(comment["website"]):
return False, "Website not Django-conform"
return True, ""
# Common definitions for apidoc follow:
"""
@apiDefine plainParam
@apiQuery {Number=0,1} [plain=0]
If set to `1`, the plain text entered by the user will be returned in the comments’ `text` attribute (instead of the rendered markdown).
"""
"""
@apiDefine commentResponse
@apiSuccess {Number} id
The comment’s id (assigned by the server).
@apiSuccess {Number} parent
Id of the comment this comment is a reply to. `null` if this is a top-level-comment.
@apiSuccess {Number=1,2,4} mode
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiSuccess {String} author
The comment’s author’s name or `null`.
@apiSuccess {String} website
The comment’s author’s website or `null`.
@apiSuccess {String} hash
A hash uniquely identifying the comment’s author.
@apiSuccess {Number} created
UNIX timestamp of the time the comment was created (on the server).
@apiSuccess {Number} modified
UNIX timestamp of the time the comment was last modified (on the server). `null` if the comment was not yet modified.
"""
"""
@apiDefine admin Admin access needed
Only available to a logged-in site admin. Requires a valid `admin-session` cookie.
"""
"""
@api {post} /new create new
@apiGroup Comment
@apiName new
@apiVersion 0.12.6
@apiDescription
Creates a new comment. The server issues a cookie per new comment which acts as
an authentication token to modify or delete the comment.
The token is cryptographically signed and expires automatically after 900 seconds (=15min) by default.
@apiUse csrf
@apiQuery {String} uri
The uri of the thread to create the comment on.
@apiBody {String{3...65535}} text
The comment’s raw text.
@apiBody {String} [author]
The comment’s author’s name.
@apiBody {String{...254}} [email]
The comment’s author’s email address.
@apiBody {String{...254}} [website]
The comment’s author’s website’s url. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiBody {Number} [parent]
The parent comment’s id if the new comment is a response to an existing comment.
@apiExample {curl} Create a reply to comment with id 15:
curl 'https://comments.example.com/new?uri=/thread/' -d '{"text": "Stop saying that! *isso*!", "author": "Max Rant", "email": "[email protected]", "parent": 15}' -H 'Content-Type: application/json' -c cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Success after the above request:
HTTP/1.1 201 CREATED
Set-Cookie: 1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
X-Set-Cookie: isso-1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
{
"website": null,
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>Stop saying that! <em>isso</em>!</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "e644f6ee43c0",
"id": 23,
"likes": 0
}
"""
@xhr
@requires(str, 'uri')
def new(self, environ, request, uri):
data = request.json
for field in set(data.keys()) - API.ACCEPT:
data.pop(field)
for key in ("author", "email", "website", "parent"):
data.setdefault(key, None)
valid, reason = API.verify(data)
if not valid:
return BadRequest(reason)
for field in ("author", "email", "website"):
if data.get(field) is not None:
data[field] = escape(data[field], quote=False)
if data.get("website"):
data["website"] = normalize(data["website"])
data['mode'] = 2 if self.moderated else 1
data['remote_addr'] = self._remote_addr(request)
with self.isso.lock:
if uri not in self.threads:
if not data.get('title'):
with http.curl('GET', local("origin"), uri) as resp:
if resp and resp.status == 200:
uri, title = parse.thread(resp.read(), id=uri)
else:
return NotFound('URI does not exist %s')
else:
title = data['title']
thread = self.threads.new(uri, title)
self.signal("comments.new:new-thread", thread)
else:
thread = self.threads[uri]
# notify extensions that the new comment is about to save
self.signal("comments.new:before-save", thread, data)
valid, reason = self.guard.validate(uri, data)
if not valid:
self.signal("comments.new:guard", reason)
raise Forbidden(reason)
with self.isso.lock:
# if email-based auto-moderation enabled, check for previously approved author
# right before approval.
if self.approve_if_email_previously_approved and self.comments.is_previously_approved_author(data['email']):
data['mode'] = 1
rv = self.comments.add(uri, data)
# notify extension, that the new comment has been successfully saved
self.signal("comments.new:after-save", thread, rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
rv["hash"] = self.hash(rv['email'] or rv['remote_addr'])
self.cache.set(
'hash', (rv['email'] or rv['remote_addr']).encode('utf-8'), rv['hash'])
rv = self._add_gravatar_image(rv)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
# success!
self.signal("comments.new:finish", thread, rv)
resp = JSON(rv, 202 if rv["mode"] == 2 else 201)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
def _remote_addr(self, request):
"""Return the anonymized IP address of the requester.
Takes into consideration a potential X-Forwarded-For HTTP header
if a necessary server.trusted-proxies configuration entry is set.
Recipe source: https://stackoverflow.com/a/22936947/636849
"""
remote_addr = request.remote_addr
if self.trusted_proxies:
route = request.access_route + [remote_addr]
remote_addr = next((addr for addr in reversed(route)
if addr not in self.trusted_proxies), remote_addr)
return utils.anonymize(str(remote_addr))
def create_cookie(self, **kwargs):
"""
Setting cookies to SameSite=None requires "Secure" attribute.
For http-only, we need to override the dump_cookie() default SameSite=None
or the cookie will be rejected.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite#samesitenone_requires_secure
"""
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
samesite = self.isso.conf.get("server", "samesite")
if isso_host_script.startswith("https://"):
secure = True
samesite = samesite or "None"
else:
secure = False
samesite = samesite or "Lax"
return functools.partial(dump_cookie, **kwargs,
secure=secure, samesite=samesite)
"""
@api {get} /id/:id view
@apiGroup Comment
@apiName view
@apiVersion 0.12.6
@apiDescription
View an existing comment, for the purpose of editing. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
@apiParam {Number} id
The id of the comment to view.
@apiUse plainParam
@apiExample {curl} View the comment with id 4:
curl 'https://comments.example.com/id/4' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample Example result:
{
"website": null,
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 1
}
"""
def view(self, environ, request, id):
rv = self.comments.get(id)
if rv is None:
raise NotFound
try:
self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
if request.args.get('plain', '0') == '0':
rv['text'] = self.isso.render(rv['text'])
return JSON(rv, 200)
"""
@api {put} /id/:id edit
@apiGroup Comment
@apiName edit
@apiVersion 0.12.6
@apiDescription
Edit an existing comment. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details. Editing a comment will set a new edit cookie in the response.
@apiUse csrf
@apiParam {Number} id
The id of the comment to edit.
@apiBody {String{3...65535}} text
A new (raw) text for the comment.
@apiBody {String} [author]
The modified comment’s author’s name.
@apiBody {String{...254}} [website]
The modified comment’s author’s website. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiExample {curl} Edit comment with id 23:
curl -X PUT 'https://comments.example.com/id/23' -d {"text": "I see your point. However, I still disagree.", "website": "maxrant.important.com"} -H 'Content-Type: application/json' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Example response:
HTTP/1.1 200 OK
{
"website": "maxrant.important.com",
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>I see your point. However, I still disagree.</p>",
"dislikes": 0,
"modified": 1464943439.073961,
"mode": 1,
"id": 23,
"likes": 0
}
"""
@xhr
def edit(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
data = request.json
if data.get("text") is None or len(data["text"]) < 3:
raise BadRequest("no text given")
for key in set(data.keys()) - set(["text", "author", "website"]):
data.pop(key)
data['modified'] = time.time()
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
resp = JSON(rv, 200)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
"""
@api {delete} /id/:id delete
@apiGroup Comment
@apiName delete
@apiVersion 0.12.6
@apiDescription
Delete an existing comment. Deleting a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
Returns either `null` or a comment with an empty text value when the comment is still referenced by other comments.
@apiUse csrf
@apiParam {Number} id
Id of the comment to delete.
@apiExample {curl} Delete comment with id 14:
curl -X DELETE 'https://comments.example.com/id/14' -b cookie.txt
@apiSuccessExample Successful deletion returns null and deletes cookie:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
null
@apiSuccessExample {json} Comment still referenced by another:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
{
"id": 14,
"parent": null,
"created": 1653432621.0512516,
"modified": 1653434488.571937,
"mode": 4,
"text": "",
"author": null,
"website": null,
"likes": 0,
"dislikes": 0,
"notification": 0
}
"""
@xhr
def delete(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ""))
except (SignatureExpired, BadSignature):
raise Forbidden
else:
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
with self.isso.lock:
rv = self.comments.delete(id)
if rv:
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.delete", id)
resp = JSON(rv, 200)
cookie = self.create_cookie(expires=0, max_age=0)
resp.headers.add("Set-Cookie", cookie(str(id)))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % id))
return resp
"""
@api {get} /id/:id/unsubscribe/:email/:key unsubscribe
@apiGroup Comment
@apiName unsubscribe
@apiVersion 0.12.6
@apiDescription
Opt out from getting any further email notifications about replies to a particular comment. In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by isso.
@apiParam {Number} id
The id of the comment to unsubscribe from replies to.
@apiParam {String} email
The email address of the subscriber.
@apiParam {String} key
The key to authenticate the subscriber.
@apiExample {curl} Unsubscribe Alice from replies to comment with id 13:
curl -X GET 'https://comments.example.com/id/13/unsubscribe/[email protected]/WyJ1bnN1YnNjcmliZSIsImFsaWNlQGV4YW1wbGUuY29tIl0.DdcH9w.Wxou-l22ySLFkKUs7RUHnoM8Kos'
@apiSuccessExample {html} Using GET:
<!DOCTYPE html>
<html>
<head>Successfully unsubscribed</head>
<body>
<p>You have been unsubscribed from replies in the given conversation.</p>
</body>
</html>
"""
def unsubscribe(self, environ, request, id, email, key):
email = unquote(email)
try:
rv = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
if not isinstance(rv, list) or len(rv) != 2:
raise Forbidden
if rv[0] != 'unsubscribe' or rv[1] != email:
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
with self.isso.lock:
self.comments.unsubscribe(email, id)
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
" <title>Successfully unsubscribed</title>"
"</head>"
"<body>"
" <p>You have been unsubscribed from replies in the given conversation.</p>"
"</body>"
"</html>")
return Response(modal, 200, content_type="text/html")
"""
@api {post} /id/:id/:action/:key moderate
@apiGroup Comment
@apiName moderate
@apiVersion 0.12.6
@apiDescription
Publish or delete a comment that is in the moderation queue (mode `2`). In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by Isso or provided in the admin interface.
This endpoint can also be used with a `GET` request. In that case, a html page is returned that asks the user whether they are sure to perform the selected action. If they select “yes”, the query is repeated using `POST`.
@apiParam {Number} id
The id of the comment to moderate.
@apiParam {String=activate,edit,delete} action
- `activate` to publish the comment (change its mode to `1`).
- `edit`: Send `text`, `author`, `email` and `website` via `POST`.
To be used from the admin interface. Better use the `edit` `PUT` endpoint.
- `delete` to delete the comment.
@apiParam {String} key
The moderation key to authenticate the moderation.
@apiExample {curl} delete comment with id 13:
curl -X POST 'https://comments.example.com/id/13/delete/MTM.CjL6Fg.REIdVXa-whJS_x8ojQL4RrXnuF4'
@apiSuccessExample {html} Request deletion using GET:
<!DOCTYPE html>
<html>
<head>
<script>
if (confirm('Delete: Are you sure?')) {
xhr = new XMLHttpRequest;
xhr.open('POST', window.location.href);
xhr.send(null);
xhr.onload = function() {
window.location.href = "https://example.com/example-thread/#isso-13";
};
}
</script>
@apiSuccessExample Delete using POST:
Comment has been deleted
@apiSuccessExample Activate using POST:
Comment has been activated
"""
def moderate(self, environ, request, id, action, key):
try:
id = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
thread = self.threads.get(item['tid'])
link = local("origin") + thread["uri"] + "#isso-%i" % item["id"]
if request.method == "GET":
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
"<script>"
" if (confirm('%s: Are you sure?')) {"
" xhr = new XMLHttpRequest;"
" xhr.open('POST', window.location.href);"
" xhr.send(null);"
" xhr.onload = function() {"
" window.location.href = %s;"
" };"
" }"
"</script>" % (action.capitalize(), json.dumps(link)))
return Response(modal, 200, content_type="text/html")
if action == "activate":
if item['mode'] == 1:
return Response("Already activated", 200)
with self.isso.lock:
self.comments.activate(id)
self.signal("comments.activate", thread, item)
return Response("Comment has been activated", 200)
elif action == "edit":
data = request.json
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
return JSON(rv, 200)
else:
with self.isso.lock:
self.comments.delete(id)
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
self.signal("comments.delete", id)
return Response("Comment has been deleted", 200)
"""
@api {get} / Get comments
@apiGroup Thread
@apiName fetch
@apiVersion 0.12.6
@apiDescription Queries the publicly visible comments of a thread.
@apiQuery {String} uri
The URI of thread to get the comments from.
@apiQuery {Number} [parent]
Return only comments that are children of the comment with the provided ID.
@apiUse plainParam
@apiQuery {Number} [limit]
The maximum number of returned top-level comments. Omit for unlimited results.
@apiQuery {Number} [nested_limit]
The maximum number of returned nested comments per comment. Omit for unlimited results.
@apiQuery {Number} [after]
Includes only comments that were added after the provided UNIX timestamp.
@apiSuccess {Number} id
Id of the comment `replies` is the list of replies of. `null` for the list of top-level comments.
@apiSuccess {Number} total_replies
The number of replies if the `limit` parameter was not set. If `after` is set to `X`, this is the number of comments that were created after `X`. So setting `after` may change this value!
@apiSuccess {Number} hidden_replies
The number of comments that were omitted from the results because of the `limit` request parameter. Usually, this will be `total_replies` - `limit`.
@apiSuccess {Object[]} replies
The list of comments. Each comment also has the `total_replies`, `replies`, `id` and `hidden_replies` properties to represent nested comments.
@apiSuccess {Object[]} config
Object holding only the client configuration parameters that depend on server settings. Will be dropped in a future version of Isso. Use the dedicated `/config` endpoint instead.
@apiExample {curl} Get 2 comments with 5 responses:
curl 'https://comments.example.com/?uri=/thread/&limit=2&nested_limit=5'
@apiSuccessExample {json} Example response:
{
"total_replies": 14,
"replies": [
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.732863,
"text": "<p>Hello, World!</p>",
"total_replies": 1,
"hidden_replies": 0,
"dislikes": 2,
"modified": null,
"mode": 1,
"replies": [
{
"website": null,
"author": null,
"parent": 1,
"created": 1464818460.769638,
"text": "<p>Hi, now some Markdown: <em>Italic</em>, <strong>bold</strong>, <code>monospace</code>.</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "2af4e1a6c96a",
"id": 2,
"likes": 2
}
],
"hash": "1cb6cc0309a2",
"id": 1,
"likes": 2
},
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.80574,
"text": "<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Accusantium at commodi cum deserunt dolore, error fugiat harum incidunt, ipsa ipsum mollitia nam provident rerum sapiente suscipit tempora vitae? Est, qui?</p>",
"total_replies": 0,
"hidden_replies": 0,
"dislikes": 0,
"modified": null,
"mode": 1,
"replies": [],
"hash": "1cb6cc0309a2",
"id": 3,
"likes": 0
},
"id": null,
"hidden_replies": 12
}
"""
@requires(str, 'uri')
def fetch(self, environ, request, uri):
args = {
'uri': uri,
'after': request.args.get('after', 0)
}
try:
args['limit'] = int(request.args.get('limit'))
except TypeError:
args['limit'] = None
except ValueError:
return BadRequest("limit should be integer")
if request.args.get('parent') is not None:
try:
args['parent'] = int(request.args.get('parent'))
root_id = args['parent']
except ValueError:
return BadRequest("parent should be integer")
else:
args['parent'] = None
root_id = None
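# Note: 'plain' is True when the client did NOT ask for plain text
# (plain=0, the default), i.e. when comment text should be rendered to HTML.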
plain = request.args.get('plain', '0') == '0'
reply_counts = self.comments.reply_count(uri, after=args['after'])
if args['limit'] == 0:
root_list = []
else:
root_list = list(self.comments.fetch(**args))
if root_id not in reply_counts:
reply_counts[root_id] = 0
try:
nested_limit = int(request.args.get('nested_limit'))
except TypeError:
nested_limit = None
except ValueError:
return BadRequest("nested_limit should be integer")
rv = {
'id': root_id,
'total_replies': reply_counts[root_id],
'hidden_replies': reply_counts[root_id] - len(root_list),
'replies': self._process_fetched_list(root_list, plain),
'config': self.public_conf
}
# We only check one level of nested comments here
if root_id is None:
for comment in rv['replies']:
if comment['id'] in reply_counts:
comment['total_replies'] = reply_counts[comment['id']]
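# Fetch this comment's replies, limited to nested_limit when it is given
# (a nested_limit of 0 means no nested replies are returned).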
if nested_limit is not None:
if nested_limit > 0:
args['parent'] = comment['id']
args['limit'] = nested_limit
replies = list(self.comments.fetch(**args))
else:
replies = []
else:
args['parent'] = comment['id']
replies = list(self.comments.fetch(**args))
else:
comment['total_replies'] = 0
replies = []
comment['hidden_replies'] = comment['total_replies'] - \
len(replies)
comment['replies'] = self._process_fetched_list(replies, plain)
return JSON(rv, 200)
def _add_gravatar_image(self, item):
if not self.conf.getboolean('gravatar'):
return item
email = item['email'] or item['author'] or ''
email_md5_hash = md5(email)
gravatar_url = self.conf.get('gravatar-url')
item['gravatar_image'] = gravatar_url.format(email_md5_hash)
return item
def _process_fetched_list(self, fetched_list, plain=False):
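# A per-author hash is derived from the email address (or the remote
# address) and memoized in the cache to avoid recomputing it.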
for item in fetched_list:
key = item['email'] or item['remote_addr']
val = self.cache.get('hash', key.encode('utf-8'))
if val is None:
val = self.hash(key)
self.cache.set('hash', key.encode('utf-8'), val)
item['hash'] = val
item = self._add_gravatar_image(item)
for key in set(item.keys()) - API.FIELDS:
item.pop(key)
if plain:
for item in fetched_list:
item['text'] = self.isso.render(item['text'])
return fetched_list
"""
@apiDefine likeResponse
@apiSuccess {Number} likes
The (new) number of likes on the comment.
@apiSuccess {Number} dislikes
The (new) number of dislikes on the comment.
@apiSuccessExample Return updated vote counts:
{
"likes": 4,
"dislikes": 3
}
"""
"""
@api {post} /id/:id/like like
@apiGroup Comment
@apiName like
@apiVersion 0.12.6
@apiDescription
Puts a “like” on a comment. The author of a comment cannot like their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to like.
@apiExample {curl} Like comment with id 23:
curl -X POST 'https://comments.example.com/id/23/like'
@apiUse likeResponse
"""
@xhr
def like(self, environ, request, id):
nv = self.comments.vote(True, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /id/:id/dislike dislike
@apiGroup Comment
@apiName dislike
@apiVersion 0.12.6
@apiDescription
Puts a “dislike” on a comment. The author of a comment cannot dislike their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to dislike.
@apiExample {curl} Dislike comment with id 23:
curl -X POST 'https://comments.example.com/id/23/dislike'
@apiUse likeResponse
"""
@xhr
def dislike(self, environ, request, id):
nv = self.comments.vote(False, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /preview preview
@apiGroup Comment
@apiName preview
@apiVersion 0.12.6
@apiDescription
Render comment text using markdown.
@apiBody {String{3...65535}} text
(Raw) comment text
@apiSuccess {String} text
Rendered comment text
@apiExample {curl} Preview comment:
curl -X POST 'https://comments.example.com/preview' -d '{"text": "A sample comment"}'
@apiSuccessExample {json} Rendered comment:
{
"text": "<p>A sample comment</p>"
}
"""
def preview(self, environment, request):
data = request.json
if data.get("text", None) is None:
raise BadRequest("no text given")
return JSON({'text': self.isso.render(data["text"])}, 200)
"""
@api {post} /count Count comments
@apiGroup Thread
@apiName counts
@apiVersion 0.12.6
@apiDescription
Counts the number of comments on multiple threads. The requestor provides a list of thread URIs. The number of comments on each thread is returned as a list, in the same order as the threads were requested. The counts include comments that are responses to comments, but only published comments (i.e. excluding comments pending moderation).
@apiBody {Number[]} urls
Array of URLs for which to fetch comment counts
@apiExample {curl} Get the respective counts of 5 threads:
curl -X POST 'https://comments.example.com/count' -d '["/blog/firstPost.html", "/blog/controversalPost.html", "/blog/howToCode.html", "/blog/boringPost.html", "/blog/isso.html"]'
@apiSuccessExample {json} Counts of 5 threads:
[2, 18, 4, 0, 3]
"""
def counts(self, environ, request):
data = request.json
if not isinstance(data, list) or not all(isinstance(x, str) for x in data):
raise BadRequest("JSON must be a list of URLs")
return JSON(self.comments.count(*data), 200)
"""
@api {get} /feed Atom feed for comments
@apiGroup Thread
@apiName feed
@apiVersion 0.12.6
@apiDescription
Provide an Atom feed for the given thread. Only available if `[rss] base` is set in server config. By default, up to 100 comments are returned.
@apiQuery {String} uri
The uri of the thread to display a feed for
@apiExample {curl} Get an Atom feed for /thread/foo in XML format:
curl 'https://comments.example.com/feed?uri=/thread/foo'
@apiSuccessExample Atom feed for /thread/foo:
<?xml version='1.0' encoding='utf-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:thr="http://purl.org/syndication/thread/1.0">
<updated>2022-05-24T20:38:04.032789Z</updated>
<id>tag:example.com,2018:/isso/thread/thread/foo</id>
<title>Comments for example.com/thread/foo</title>
<entry>
<id>tag:example.com,2018:/isso/1/2</id>
<title>Comment #2</title>
<updated>2022-05-24T20:38:04.032789Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-2" />
<content type="html"><p>And another</p></content>
</entry>
<entry>
<id>tag:example.com,2018:/isso/1/1</id>
<title>Comment #1</title>
<updated>2022-05-24T20:38:00.837703Z</updated>
<author>
<name>Jane Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-1" />
<content type="html"><p>A sample comment</p></content>
</entry>
</feed>
"""
@requires(str, 'uri')
def feed(self, environ, request, uri):
conf = self.isso.conf.section("rss")
if not conf.get('base'):
raise NotFound
args = {
'uri': uri,
'order_by': 'id',
'asc': 0,
'limit': conf.getint('limit')
}
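# A client-supplied limit can only raise the configured limit, never lower it.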
try:
args['limit'] = max(int(request.args.get('limit')), args['limit'])
except TypeError:
pass
except ValueError:
return BadRequest("limit should be integer")
comments = self.comments.fetch(**args)
base = conf.get('base').rstrip('/')
hostname = urlparse(base).netloc
# Let's build an Atom feed.
# RFC 4287: https://tools.ietf.org/html/rfc4287
# RFC 4685: https://tools.ietf.org/html/rfc4685 (threading extensions)
# For IDs: http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id
feed = ET.Element('feed', {
'xmlns': 'http://www.w3.org/2005/Atom',
'xmlns:thr': 'http://purl.org/syndication/thread/1.0'
})
# For feed ID, we would use thread ID, but we may not have
# one. Therefore, we use the URI. We don't have a year
# either...
id = ET.SubElement(feed, 'id')
id.text = 'tag:{hostname},2018:/isso/thread{uri}'.format(
hostname=hostname, uri=uri)
# For title, we don't have much either. Be pretty generic.
title = ET.SubElement(feed, 'title')
title.text = 'Comments for {hostname}{uri}'.format(
hostname=hostname, uri=uri)
comment0 = None
for comment in comments:
if comment0 is None:
comment0 = comment
entry = ET.SubElement(feed, 'entry')
# We don't use a real date in ID either to help with
# threading.
id = ET.SubElement(entry, 'id')
id.text = 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['id'])
title = ET.SubElement(entry, 'title')
title.text = 'Comment #{}'.format(comment['id'])
updated = ET.SubElement(entry, 'updated')
updated.text = '{}Z'.format(datetime.fromtimestamp(
comment['modified'] or comment['created']).isoformat())
author = ET.SubElement(entry, 'author')
name = ET.SubElement(author, 'name')
name.text = comment['author']
ET.SubElement(entry, 'link', {
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['id'])
})
content = ET.SubElement(entry, 'content', {
'type': 'html',
})
content.text = self.isso.render(comment['text'])
if comment['parent']:
ET.SubElement(entry, 'thr:in-reply-to', {
'ref': 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['parent']),
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['parent'])
})
# Updated is mandatory. If we have comments, we use the date
# of last modification of the first one (which is the last
# one). Otherwise, we use a fixed date.
updated = ET.Element('updated')
if comment0 is None:
updated.text = '1970-01-01T01:00:00Z'
else:
updated.text = datetime.fromtimestamp(
comment0['modified'] or comment0['created']).isoformat()
updated.text += 'Z'
feed.insert(0, updated)
output = StringIO()
ET.ElementTree(feed).write(output,
encoding='utf-8',
xml_declaration=True)
response = XML(output.getvalue(), 200)
# Add an etag/last-modified value for caching purposes
if comment0 is None:
response.set_etag('empty')
response.last_modified = 0
else:
response.set_etag('{tid}-{id}'.format(**comment0))
response.last_modified = comment0['modified'] or comment0['created']
return response.make_conditional(request)
"""
@api {get} /config Fetch client config
@apiGroup Thread
@apiName config
@apiVersion 0.13.0
@apiDescription
Returns only the client configuration parameters that depend on server settings.
@apiSuccess {Object[]} config
The client configuration object.
@apiSuccess {Boolean} config.reply-to-self
Commenters can reply to their own comments.
@apiSuccess {Boolean} config.require-author
Commenters must enter valid Name.
@apiSuccess {Boolean} config.require-email
Commenters must enter valid email.
@apiSuccess {Boolean} config.reply-notifications
Enable reply notifications via E-mail.
@apiSuccess {Boolean} config.gravatar
Load images from Gravatar service instead of generating them. Also disables regular avatars (see below).
@apiSuccess {Boolean} config.avatar
To avoid having both regular avatars and Gravatars side-by-side,
setting `gravatar` will disable regular avatars. The `avatar` key will
only be sent by the server if `gravatar` is set.
@apiSuccess {Boolean} config.feed
Enable or disable the addition of a link to the feed for the comment
thread.
@apiExample {curl} get the client config:
curl 'https://comments.example.com/config'
@apiSuccessExample {json} Client config:
{
"config": {
"reply-to-self": false,
"require-email": false,
"require-author": false,
"reply-notifications": false,
"gravatar": true,
"avatar": false,
"feed": false
}
}
"""
def config(self, environment, request):
rv = {'config': self.public_conf}
return JSON(rv, 200)
"""
@api {get} /demo/ Isso demo page
@apiGroup Demo
@apiName demo
@apiVersion 0.13.0
@apiPrivate
@apiDescription
Displays a demonstration of Isso with a thread counter and comment widget.
@apiExample {curl} Get demo page
curl 'https://comments.example.com/demo/'
@apiSuccessExample {html} Demo page:
<!DOCTYPE html>
<head>
<title>Isso Demo</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div id="page">
<div id="wrapper" style="max-width: 900px; margin-left: auto; margin-right: auto;">
<h2><a href=".">Isso Demo</a></h2>
<script src="../js/embed.dev.js" data-isso="../" ></script>
<section>
<p>This is a link to a thread, which will display a comment counter:
<a href=".#isso-thread">How many Comments?</a></p>
<p>Below is the actual comment field.</p>
</section>
<section id="isso-thread" data-title="Isso Test"><noscript>Javascript needs to be activated to view comments.</noscript></section>
</div>
</div>
</body>
"""
def demo(self, env, req):
index = pkg_resources.resource_filename('isso', 'demo/index.html')
return send_from_directory(os_path.dirname(index), 'index.html', env)
"""
@api {post} /login/ Log in
@apiGroup Admin
@apiName login
@apiVersion 0.12.6
@apiPrivate
@apiDescription
Log in to the admin interface; redirects to `/admin/` on success. Must use form data, not a JSON `POST` body.
@apiBody {String} password
The admin password as set in `[admin] password` in the server config.
@apiExample {curl} Log in
curl -X POST 'https://comments.example.com/login' -F "password=strong_default_password_for_isso_admin" -c cookie.txt
@apiSuccessExample {html} Login successful:
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="https://comments.example.com/admin/">https://comments.example.com/admin/</a>. If not, click the link.
"""
def login(self, env, req):
if not self.isso.conf.getboolean("admin", "enabled"):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('disabled.html', isso_host_script=isso_host_script)
data = req.form
password = self.isso.conf.get("admin", "password")
if data['password'] and data['password'] == password:
response = redirect(re.sub(
r'/login/$',
'/admin/',
get_current_url(env, strip_querystring=True)
))
cookie = self.create_cookie(value=self.isso.sign({"logged": True}),
expires=datetime.now() + timedelta(1))
response.headers.add("Set-Cookie", cookie("admin-session"))
response.headers.add("X-Set-Cookie", cookie("isso-admin-session"))
return response
else:
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('login.html', isso_host_script=isso_host_script)
"""
@api {get} /admin/ Admin interface
@apiGroup Admin
@apiName admin
@apiVersion 0.12.6
@apiPrivate
@apiPermission admin
@apiDescription
Display an admin interface from which to manage comments. Will redirect to `/login` if not already logged in.
@apiQuery {Number} [page=0]
Page number
@apiQuery {Number{1,2,4}} [mode=2]
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiQuery {String{id,created,modified,likes,dislikes,tid}} [order_by=created]
Comment ordering
@apiQuery {Number{0,1}} [asc=0]
Ascending
@apiExample {curl} Listing of published comments:
curl 'https://comments.example.com/admin/?mode=1&page=0&order_by=modified&asc=1' -b cookie.txt
"""
def admin(self, env, req):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
if not self.isso.conf.getboolean("admin", "enabled"):
return render_template('disabled.html', isso_host_script=isso_host_script)
try:
data = self.isso.unsign(req.cookies.get('admin-session', ''),
max_age=60 * 60 * 24)
except BadSignature:
return render_template('login.html', isso_host_script=isso_host_script)
if not data or not data['logged']:
return render_template('login.html', isso_host_script=isso_host_script)
page_size = 100
page = int(req.args.get('page', 0))
order_by = req.args.get('order_by', 'created')
asc = int(req.args.get('asc', 0))
mode = int(req.args.get('mode', 2))
comments = self.comments.fetchall(mode=mode, page=page,
limit=page_size,
order_by=order_by,
asc=asc)
comments_enriched = []
for comment in list(comments):
comment['hash'] = self.isso.sign(comment['id'])
comments_enriched.append(comment)
comment_mode_count = self.comments.count_modes()
max_page = int(sum(comment_mode_count.values()) / page_size)
return render_template('admin.html', comments=comments_enriched,
page=int(page), mode=int(mode),
conf=self.conf, max_page=max_page,
counts=comment_mode_count,
order_by=order_by, asc=asc,
isso_host_script=isso_host_script)
"""
@api {get} /latest latest
@apiGroup Comment
@apiName latest
@apiVersion 0.12.6
@apiDescription
Get the latest comments from the system, no matter which thread. Only available if `[general] latest-enabled` is set to `true` in server config.
@apiQuery {Number} limit
The quantity of last comments to retrieve
@apiExample {curl} Get the latest 5 comments
curl 'https://comments.example.com/latest?limit=5'
@apiUse commentResponse
@apiSuccessExample Example result:
[
{
"website": null,
"uri": "/some",
"author": null,
"parent": null,
"created": 1464912312.123416,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 3,
"likes": 1
},
{
"website": null,
"uri": "/other",
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 0
}
]
"""
def latest(self, environ, request):
# if the feature is not allowed, don't present the endpoint
if not self.conf.getboolean("latest-enabled"):
return NotFound(
"Unavailable because 'latest-enabled' not set by site admin"
)
# get and check the limit
bad_limit_msg = "Query parameter 'limit' is mandatory (integer, >0)"
try:
limit = int(request.args['limit'])
except (KeyError, ValueError):
return BadRequest(bad_limit_msg)
if limit <= 0:
return BadRequest(bad_limit_msg)
# retrieve the latest N comments from the DB
all_comments_gen = self.comments.fetchall(limit=None, order_by='created', mode='1')
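# A bounded deque keeps only the last `limit` items yielded by the
# generator, i.e. the most recently created comments.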
comments = collections.deque(all_comments_gen, maxlen=limit)
# prepare a special set of fields (except text which is rendered specifically)
fields = {
'author',
'created',
'dislikes',
'id',
'likes',
'mode',
'modified',
'parent',
'text',
'uri',
'website',
}
# process the retrieved comments and build results
result = []
for comment in comments:
processed = {key: comment[key] for key in fields}
processed['text'] = self.isso.render(comment['text'])
result.append(processed)
return JSON(result, 200)
| # -*- encoding: utf-8 -*-
import collections
import re
import time
import functools
import json # json.dumps to put URL in <script>
import pkg_resources
from configparser import NoOptionError
from datetime import datetime, timedelta
from html import escape
from io import BytesIO as StringIO
from os import path as os_path
from urllib.parse import unquote, urlparse
from xml.etree import ElementTree as ET
from itsdangerous import SignatureExpired, BadSignature
from werkzeug.exceptions import BadRequest, Forbidden, NotFound
from werkzeug.http import dump_cookie
from werkzeug.routing import Rule
from werkzeug.utils import redirect, send_from_directory
from werkzeug.wrappers import Response
from werkzeug.wsgi import get_current_url
from isso import utils, local
from isso.utils import (http, parse,
JSONResponse as JSON, XMLResponse as XML,
render_template)
from isso.utils.hash import md5, sha1
from isso.views import requires
# from Django apparently, looks good to me *duck*
__url_re = re.compile(
r'^'
r'(https?://)?'
# domain...
r'(?:(?:[\w](?:[\w-]{0,61}[\w])?\.)+(?:[\w]{2,6}\.?|[\w-]{2,}\.?)|'
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)'
r'$', re.IGNORECASE | re.UNICODE)
def isurl(text):
return __url_re.match(text) is not None
def normalize(url):
if not url.startswith(("http://", "https://")):
return "http://" + url
return url
def xhr(func):
"""A decorator to check for CSRF on POST/PUT/DELETE using a <form>
element and JS to execute automatically (see #40 for a proof-of-concept).
When an attacker uses a <form> to downvote a comment, the browser *should*
add a `Content-Type: ...` header with three possible values:
* application/x-www-form-urlencoded
* multipart/form-data
* text/plain
If the header is missing or set to `application/json`, the request was not
forged via a plain `<form>` (XHR itself is restricted separately by CORS).
"""
"""
@apiDefine csrf
@apiHeader {String="application/json"} Content-Type
The content type must be set to `application/json` to prevent CSRF attacks.
"""
def dec(self, env, req, *args, **kwargs):
if req.content_type and not req.content_type.startswith("application/json"):
raise Forbidden("CSRF")
return func(self, env, req, *args, **kwargs)
return dec
class API(object):
FIELDS = set(['id', 'parent', 'text', 'author', 'website',
'mode', 'created', 'modified', 'likes', 'dislikes', 'hash', 'gravatar_image', 'notification'])
# comment fields that can be submitted
ACCEPT = set(['text', 'author', 'website', 'email', 'parent', 'title', 'notification'])
VIEWS = [
('fetch', ('GET', '/')),
('new', ('POST', '/new')),
('counts', ('POST', '/count')),
('feed', ('GET', '/feed')),
('latest', ('GET', '/latest')),
('view', ('GET', '/id/<int:id>')),
('edit', ('PUT', '/id/<int:id>')),
('delete', ('DELETE', '/id/<int:id>')),
('unsubscribe', ('GET', '/id/<int:id>/unsubscribe/<string:email>/<string:key>')),
('moderate', ('GET', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('moderate', ('POST', '/id/<int:id>/<any(edit,activate,delete):action>/<string:key>')),
('like', ('POST', '/id/<int:id>/like')),
('dislike', ('POST', '/id/<int:id>/dislike')),
('demo', ('GET', '/demo/')),
('preview', ('POST', '/preview')),
('config', ('GET', '/config')),
('login', ('POST', '/login/')),
('admin', ('GET', '/admin/'))
]
def __init__(self, isso, hasher):
self.isso = isso
self.hash = hasher.uhash
self.cache = isso.cache
self.signal = isso.signal
self.conf = isso.conf.section("general")
self.moderated = isso.conf.getboolean("moderation", "enabled")
# this is similar to the wordpress setting "Comment author must have a previously approved comment"
try:
self.approve_if_email_previously_approved = isso.conf.getboolean("moderation", "approve-if-email-previously-approved")
except NoOptionError:
self.approve_if_email_previously_approved = False
try:
self.trusted_proxies = list(isso.conf.getiter("server", "trusted-proxies"))
except NoOptionError:
self.trusted_proxies = []
# These configuration records can be read out by the client
self.public_conf = {}
self.public_conf["reply-to-self"] = isso.conf.getboolean("guard", "reply-to-self")
self.public_conf["require-email"] = isso.conf.getboolean("guard", "require-email")
self.public_conf["require-author"] = isso.conf.getboolean("guard", "require-author")
self.public_conf["reply-notifications"] = isso.conf.getboolean("general", "reply-notifications")
self.public_conf["gravatar"] = isso.conf.getboolean("general", "gravatar")
if self.public_conf["gravatar"]:
self.public_conf["avatar"] = False
self.public_conf["feed"] = False
rss = isso.conf.section("rss")
if rss and rss.get('base'):
self.public_conf["feed"] = True
self.guard = isso.db.guard
self.threads = isso.db.threads
self.comments = isso.db.comments
for (view, (method, path)) in self.VIEWS:
isso.urls.add(
Rule(path, methods=[method], endpoint=getattr(self, view)))
@classmethod
def verify(cls, comment):
if comment.get("text") is None:
return False, "text is missing"
if not isinstance(comment.get("parent"), (int, type(None))):
return False, "parent must be an integer or null"
for key in ("text", "author", "website", "email"):
if not isinstance(comment.get(key), (str, type(None))):
return False, "%s must be a string or null" % key
if len(comment["text"].rstrip()) < 3:
return False, "text is too short (minimum length: 3)"
if len(comment["text"]) > 65535:
return False, "text is too long (maximum length: 65535)"
if len(comment.get("email") or "") > 254:
return False, "http://tools.ietf.org/html/rfc5321#section-4.5.3"
if comment.get("website"):
if len(comment["website"]) > 254:
return False, "arbitrary length limit"
if not isurl(comment["website"]):
return False, "Website not Django-conform"
return True, ""
# Common definitions for apidoc follow:
"""
@apiDefine plainParam
@apiQuery {Number=0,1} [plain=0]
If set to `1`, the plain text entered by the user will be returned in the comments’ `text` attribute (instead of the rendered markdown).
"""
"""
@apiDefine commentResponse
@apiSuccess {Number} id
The comment’s id (assigned by the server).
@apiSuccess {Number} parent
Id of the comment this comment is a reply to. `null` if this is a top-level-comment.
@apiSuccess {Number=1,2,4} mode
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiSuccess {String} author
The comments’s author’s name or `null`.
@apiSuccess {String} website
The comment’s author’s website or `null`.
@apiSuccess {String} hash
A hash uniquely identifying the comment’s author.
@apiSuccess {Number} created
UNIX timestamp of the time the comment was created (on the server).
@apiSuccess {Number} modified
UNIX timestamp of the time the comment was last modified (on the server). `null` if the comment was not yet modified.
"""
"""
@apiDefine admin Admin access needed
Only available to a logged-in site admin. Requires a valid `admin-session` cookie.
"""
"""
@api {post} /new create new
@apiGroup Comment
@apiName new
@apiVersion 0.12.6
@apiDescription
Creates a new comment. The server issues a cookie per new comment which acts as
an authentication token to modify or delete the comment.
The token is cryptographically signed and expires automatically after 900 seconds (=15min) by default.
@apiUse csrf
@apiQuery {String} uri
The uri of the thread to create the comment on.
@apiBody {String{3...65535}} text
The comment’s raw text.
@apiBody {String} [author]
The comment’s author’s name.
@apiBody {String{...254}} [email]
The comment’s author’s email address.
@apiBody {String{...254}} [website]
The comment’s author’s website’s url. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiBody {Number} [parent]
The parent comment’s id if the new comment is a response to an existing comment.
@apiExample {curl} Create a reply to comment with id 15:
curl 'https://comments.example.com/new?uri=/thread/' -d '{"text": "Stop saying that! *isso*!", "author": "Max Rant", "email": "[email protected]", "parent": 15}' -H 'Content-Type: application/json' -c cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Success after the above request:
HTTP/1.1 201 CREATED
Set-Cookie: 1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
X-Set-Cookie: isso-1=...; Expires=Wed, 18-Dec-2013 12:57:20 GMT; Max-Age=900; Path=/; SameSite=Lax
{
"website": null,
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>Stop saying that! <em>isso</em>!</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "e644f6ee43c0",
"id": 23,
"likes": 0
}
"""
@xhr
@requires(str, 'uri')
def new(self, environ, request, uri):
data = request.json
for field in set(data.keys()) - API.ACCEPT:
data.pop(field)
for key in ("author", "email", "website", "parent"):
data.setdefault(key, None)
valid, reason = API.verify(data)
if not valid:
return BadRequest(reason)
for field in ("author", "email", "website"):
if data.get(field) is not None:
data[field] = escape(data[field], quote=False)
if data.get("website"):
data["website"] = normalize(data["website"])
data['mode'] = 2 if self.moderated else 1
data['remote_addr'] = self._remote_addr(request)
with self.isso.lock:
if uri not in self.threads:
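# Unknown thread: take the title from the submitted data if present,
# otherwise fetch the page at `uri` and parse its title.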
if not data.get('title'):
with http.curl('GET', local("origin"), uri) as resp:
if resp and resp.status == 200:
uri, title = parse.thread(resp.read(), id=uri)
else:
return NotFound('URI does not exist: %s' % uri)
else:
title = data['title']
thread = self.threads.new(uri, title)
self.signal("comments.new:new-thread", thread)
else:
thread = self.threads[uri]
# notify extensions that the new comment is about to be saved
self.signal("comments.new:before-save", thread, data)
valid, reason = self.guard.validate(uri, data)
if not valid:
self.signal("comments.new:guard", reason)
raise Forbidden(reason)
with self.isso.lock:
# if email-based auto-moderation is enabled, check for a previously approved author
# right before approval.
if self.approve_if_email_previously_approved and self.comments.is_previously_approved_author(data['email']):
data['mode'] = 1
rv = self.comments.add(uri, data)
# notify extensions that the new comment has been successfully saved
self.signal("comments.new:after-save", thread, rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
rv["hash"] = self.hash(rv['email'] or rv['remote_addr'])
self.cache.set(
'hash', (rv['email'] or rv['remote_addr']).encode('utf-8'), rv['hash'])
rv = self._add_gravatar_image(rv)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
# success!
self.signal("comments.new:finish", thread, rv)
resp = JSON(rv, 202 if rv["mode"] == 2 else 201)
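# Hand out the edit/delete token both as a regular Set-Cookie and as an
# 'isso-' prefixed X-Set-Cookie header.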
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
def _remote_addr(self, request):
"""Return the anonymized IP address of the requester.
Takes a potential X-Forwarded-For HTTP header into account if the
corresponding server.trusted-proxies configuration entry is set.
Recipe source: https://stackoverflow.com/a/22936947/636849
"""
remote_addr = request.remote_addr
if self.trusted_proxies:
route = request.access_route + [remote_addr]
remote_addr = next((addr for addr in reversed(route)
if addr not in self.trusted_proxies), remote_addr)
return utils.anonymize(str(remote_addr))
def create_cookie(self, **kwargs):
"""
Setting cookies to SameSite=None requires "Secure" attribute.
For http-only, we need to override the dump_cookie() default SameSite=None
or the cookie will be rejected.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite#samesitenone_requires_secure
"""
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
samesite = self.isso.conf.get("server", "samesite")
if isso_host_script.startswith("https://"):
secure = True
samesite = samesite or "None"
else:
secure = False
samesite = samesite or "Lax"
return functools.partial(dump_cookie, **kwargs,
secure=secure, samesite=samesite)
"""
@api {get} /id/:id view
@apiGroup Comment
@apiName view
@apiVersion 0.12.6
@apiDescription
View an existing comment, for the purpose of editing. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
@apiParam {Number} id
The id of the comment to view.
@apiUse plainParam
@apiExample {curl} View the comment with id 4:
curl 'https://comments.example.com/id/4' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample Example result:
{
"website": null,
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 1
}
"""
def view(self, environ, request, id):
rv = self.comments.get(id)
if rv is None:
raise NotFound
try:
self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
if request.args.get('plain', '0') == '0':
rv['text'] = self.isso.render(rv['text'])
return JSON(rv, 200)
"""
@api {put} /id/:id edit
@apiGroup Comment
@apiName edit
@apiVersion 0.12.6
@apiDescription
Edit an existing comment. Editing a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details. Editing a comment will set a new edit cookie in the response.
@apiUse csrf
@apiParam {Number} id
The id of the comment to edit.
@apiBody {String{3...65535}} text
A new (raw) text for the comment.
@apiBody {String} [author]
The modified comment’s author’s name.
@apiBody {String{...254}} [website]
The modified comment’s author’s website. Must be Django-conform, i.e. either `http(s)://example.com/foo` or `example.com/`
@apiExample {curl} Edit comment with id 23:
curl -X PUT 'https://comments.example.com/id/23' -d {"text": "I see your point. However, I still disagree.", "website": "maxrant.important.com"} -H 'Content-Type: application/json' -b cookie.txt
@apiUse commentResponse
@apiSuccessExample {json} Example response:
HTTP/1.1 200 OK
{
"website": "maxrant.important.com",
"author": "Max Rant",
"parent": 15,
"created": 1464940838.254393,
"text": "<p>I see your point. However, I still disagree.</p>",
"dislikes": 0,
"modified": 1464943439.073961,
"mode": 1,
"id": 23,
"likes": 0
}
"""
@xhr
def edit(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ''))
except (SignatureExpired, BadSignature):
raise Forbidden
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
data = request.json
if data.get("text") is None or len(data["text"]) < 3:
raise BadRequest("no text given")
for key in set(data.keys()) - set(["text", "author", "website"]):
data.pop(key)
data['modified'] = time.time()
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
cookie = self.create_cookie(
value=self.isso.sign([rv["id"], sha1(rv["text"])]),
max_age=self.conf.getint('max-age'))
rv["text"] = self.isso.render(rv["text"])
resp = JSON(rv, 200)
resp.headers.add("Set-Cookie", cookie(str(rv["id"])))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % rv["id"]))
return resp
"""
@api {delete} /id/:id delete
@apiGroup Comment
@apiName delete
@apiVersion 0.12.6
@apiDescription
Delete an existing comment. Deleting a comment is only possible for a short period of time (15min by default) after it was created and only if the requestor has a valid cookie for it. See the [Isso server documentation](https://isso-comments.de/docs/reference/server-config/) for details.
Returns either `null` or a comment with an empty text value when the comment is still referenced by other comments.
@apiUse csrf
@apiParam {Number} id
Id of the comment to delete.
@apiExample {curl} Delete comment with id 14:
curl -X DELETE 'https://comments.example.com/id/14' -b cookie.txt
@apiSuccessExample Successful deletion returns null and deletes cookie:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
null
@apiSuccessExample {json} Comment still referenced by another:
HTTP/1.1 200 OK
Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
X-Set-Cookie 14=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/; SameSite=Lax
{
"id": 14,
"parent": null,
"created": 1653432621.0512516,
"modified": 1653434488.571937,
"mode": 4,
"text": "",
"author": null,
"website": null,
"likes": 0,
"dislikes": 0,
"notification": 0
}
"""
@xhr
def delete(self, environ, request, id):
try:
rv = self.isso.unsign(request.cookies.get(str(id), ""))
except (SignatureExpired, BadSignature):
raise Forbidden
else:
if rv[0] != id:
raise Forbidden
# verify checksum, mallory might skip cookie deletion when he deletes a comment
if rv[1] != sha1(self.comments.get(id)["text"]):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
with self.isso.lock:
rv = self.comments.delete(id)
if rv:
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.delete", id)
resp = JSON(rv, 200)
cookie = self.create_cookie(expires=0, max_age=0)
resp.headers.add("Set-Cookie", cookie(str(id)))
resp.headers.add("X-Set-Cookie", cookie("isso-%i" % id))
return resp
"""
@api {get} /id/:id/unsubscribe/:email/:key unsubscribe
@apiGroup Comment
@apiName unsubscribe
@apiVersion 0.12.6
@apiDescription
Opt out from getting any further email notifications about replies to a particular comment. In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by isso.
@apiParam {Number} id
The id of the comment to unsubscribe from replies to.
@apiParam {String} email
The email address of the subscriber.
@apiParam {String} key
The key to authenticate the subscriber.
@apiExample {curl} Unsubscribe Alice from replies to comment with id 13:
curl -X GET 'https://comments.example.com/id/13/unsubscribe/[email protected]/WyJ1bnN1YnNjcmliZSIsImFsaWNlQGV4YW1wbGUuY29tIl0.DdcH9w.Wxou-l22ySLFkKUs7RUHnoM8Kos'
@apiSuccessExample {html} Using GET:
<!DOCTYPE html>
<html>
<head>Successfully unsubscribed</head>
<body>
<p>You have been unsubscribed from replies in the given conversation.</p>
</body>
</html>
"""
def unsubscribe(self, environ, request, id, email, key):
email = unquote(email)
try:
rv = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
if not isinstance(rv, list) or len(rv) != 2:
raise Forbidden
if rv[0] != 'unsubscribe' or rv[1] != email:
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
with self.isso.lock:
self.comments.unsubscribe(email, id)
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
" <title>Successfully unsubscribed</title>"
"</head>"
"<body>"
" <p>You have been unsubscribed from replies in the given conversation.</p>"
"</body>"
"</html>")
return Response(modal, 200, content_type="text/html")
"""
@api {post} /id/:id/:action/:key moderate
@apiGroup Comment
@apiName moderate
@apiVersion 0.12.6
@apiDescription
Publish or delete a comment that is in the moderation queue (mode `2`). In order to use this endpoint, the requestor needs a `key` that is usually obtained from an email sent out by Isso or provided in the admin interface.
This endpoint can also be used with a `GET` request. In that case, an HTML page is returned that asks the user to confirm the selected action. If they confirm, the query is repeated using `POST`.
@apiParam {Number} id
The id of the comment to moderate.
@apiParam {String=activate,edit,delete} action
- `activate` to publish the comment (change its mode to `1`).
- `edit`: Send `text`, `author`, `email` and `website` via `POST`.
To be used from the admin interface. Better use the `edit` `PUT` endpoint.
- `delete` to delete the comment.
@apiParam {String} key
The moderation key to authenticate the moderation.
@apiExample {curl} delete comment with id 13:
curl -X POST 'https://comments.example.com/id/13/delete/MTM.CjL6Fg.REIdVXa-whJS_x8ojQL4RrXnuF4'
@apiSuccessExample {html} Request deletion using GET:
<!DOCTYPE html>
<html>
<head>
<script>
if (confirm('Delete: Are you sure?')) {
xhr = new XMLHttpRequest;
xhr.open('POST', window.location.href);
xhr.send(null);
xhr.onload = function() {
window.location.href = "https://example.com/example-thread/#isso-13";
};
}
</script>
@apiSuccessExample Delete using POST:
Comment has been deleted
@apiSuccessExample Activate using POST:
Comment has been activated
"""
def moderate(self, environ, request, id, action, key):
try:
id = self.isso.unsign(key, max_age=2**32)
except (BadSignature, SignatureExpired):
raise Forbidden
item = self.comments.get(id)
if item is None:
raise NotFound
thread = self.threads.get(item['tid'])
link = local("origin") + thread["uri"] + "#isso-%i" % item["id"]
if request.method == "GET":
modal = (
"<!DOCTYPE html>"
"<html>"
"<head>"
"<script>"
" if (confirm('%s: Are you sure?')) {"
" xhr = new XMLHttpRequest;"
" xhr.open('POST', window.location.href);"
" xhr.send(null);"
" xhr.onload = function() {"
" window.location.href = %s;"
" };"
" }"
"</script>" % (action.capitalize(), json.dumps(link)))
return Response(modal, 200, content_type="text/html")
if action == "activate":
if item['mode'] == 1:
return Response("Already activated", 200)
with self.isso.lock:
self.comments.activate(id)
self.signal("comments.activate", thread, item)
return Response("Comment has been activated", 200)
elif action == "edit":
data = request.json
with self.isso.lock:
rv = self.comments.update(id, data)
for key in set(rv.keys()) - API.FIELDS:
rv.pop(key)
self.signal("comments.edit", rv)
return JSON(rv, 200)
else:
with self.isso.lock:
self.comments.delete(id)
self.cache.delete(
'hash', (item['email'] or item['remote_addr']).encode('utf-8'))
self.signal("comments.delete", id)
return Response("Comment has been deleted", 200)
"""
@api {get} / Get comments
@apiGroup Thread
@apiName fetch
@apiVersion 0.12.6
@apiDescription Queries the publicly visible comments of a thread.
@apiQuery {String} uri
The URI of thread to get the comments from.
@apiQuery {Number} [parent]
Return only comments that are children of the comment with the provided ID.
@apiUse plainParam
@apiQuery {Number} [limit]
The maximum number of returned top-level comments. Omit for unlimited results.
@apiQuery {Number} [nested_limit]
The maximum number of returned nested comments per comment. Omit for unlimited results.
@apiQuery {Number} [after]
Includes only comments that were added after the provided UNIX timestamp.
@apiSuccess {Number} id
Id of the comment `replies` is the list of replies of. `null` for the list of top-level comments.
@apiSuccess {Number} total_replies
The number of replies if the `limit` parameter was not set. If `after` is set to `X`, this is the number of comments that were created after `X`. So setting `after` may change this value!
@apiSuccess {Number} hidden_replies
The number of comments that were omitted from the results because of the `limit` request parameter. Usually, this will be `total_replies` - `limit`.
@apiSuccess {Object[]} replies
The list of comments. Each comment also has the `total_replies`, `replies`, `id` and `hidden_replies` properties to represent nested comments.
@apiSuccess {Object[]} config
Object holding only the client configuration parameters that depend on server settings. Will be dropped in a future version of Isso. Use the dedicated `/config` endpoint instead.
@apiExample {curl} Get 2 comments with 5 responses:
curl 'https://comments.example.com/?uri=/thread/&limit=2&nested_limit=5'
@apiSuccessExample {json} Example response:
{
"total_replies": 14,
"replies": [
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.732863,
"text": "<p>Hello, World!</p>",
"total_replies": 1,
"hidden_replies": 0,
"dislikes": 2,
"modified": null,
"mode": 1,
"replies": [
{
"website": null,
"author": null,
"parent": 1,
"created": 1464818460.769638,
"text": "<p>Hi, now some Markdown: <em>Italic</em>, <strong>bold</strong>, <code>monospace</code>.</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"hash": "2af4e1a6c96a",
"id": 2,
"likes": 2
}
],
"hash": "1cb6cc0309a2",
"id": 1,
"likes": 2
},
{
"website": null,
"author": null,
"parent": null,
"created": 1464818460.80574,
"text": "<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Accusantium at commodi cum deserunt dolore, error fugiat harum incidunt, ipsa ipsum mollitia nam provident rerum sapiente suscipit tempora vitae? Est, qui?</p>",
"total_replies": 0,
"hidden_replies": 0,
"dislikes": 0,
"modified": null,
"mode": 1,
"replies": [],
"hash": "1cb6cc0309a2",
"id": 3,
"likes": 0
},
"id": null,
"hidden_replies": 12
}
"""
@requires(str, 'uri')
def fetch(self, environ, request, uri):
args = {
'uri': uri,
'after': request.args.get('after', 0)
}
try:
args['limit'] = int(request.args.get('limit'))
except TypeError:
args['limit'] = None
except ValueError:
return BadRequest("limit should be integer")
if request.args.get('parent') is not None:
try:
args['parent'] = int(request.args.get('parent'))
root_id = args['parent']
except ValueError:
return BadRequest("parent should be integer")
else:
args['parent'] = None
root_id = None
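# Note: 'plain' is True when the client did NOT ask for plain text
# (plain=0, the default), i.e. when comment text should be rendered to HTML.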
plain = request.args.get('plain', '0') == '0'
reply_counts = self.comments.reply_count(uri, after=args['after'])
if args['limit'] == 0:
root_list = []
else:
root_list = list(self.comments.fetch(**args))
if root_id not in reply_counts:
reply_counts[root_id] = 0
try:
nested_limit = int(request.args.get('nested_limit'))
except TypeError:
nested_limit = None
except ValueError:
return BadRequest("nested_limit should be integer")
rv = {
'id': root_id,
'total_replies': reply_counts[root_id],
'hidden_replies': reply_counts[root_id] - len(root_list),
'replies': self._process_fetched_list(root_list, plain),
'config': self.public_conf
}
# We only check one level of nested comments here
if root_id is None:
for comment in rv['replies']:
if comment['id'] in reply_counts:
comment['total_replies'] = reply_counts[comment['id']]
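# Fetch this comment's replies, limited to nested_limit when it is given
# (a nested_limit of 0 means no nested replies are returned).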
if nested_limit is not None:
if nested_limit > 0:
args['parent'] = comment['id']
args['limit'] = nested_limit
replies = list(self.comments.fetch(**args))
else:
replies = []
else:
args['parent'] = comment['id']
replies = list(self.comments.fetch(**args))
else:
comment['total_replies'] = 0
replies = []
comment['hidden_replies'] = comment['total_replies'] - \
len(replies)
comment['replies'] = self._process_fetched_list(replies, plain)
return JSON(rv, 200)
def _add_gravatar_image(self, item):
if not self.conf.getboolean('gravatar'):
return item
email = item['email'] or item['author'] or ''
email_md5_hash = md5(email)
gravatar_url = self.conf.get('gravatar-url')
item['gravatar_image'] = gravatar_url.format(email_md5_hash)
return item
def _process_fetched_list(self, fetched_list, plain=False):
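# A per-author hash is derived from the email address (or the remote
# address) and memoized in the cache to avoid recomputing it.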
for item in fetched_list:
key = item['email'] or item['remote_addr']
val = self.cache.get('hash', key.encode('utf-8'))
if val is None:
val = self.hash(key)
self.cache.set('hash', key.encode('utf-8'), val)
item['hash'] = val
item = self._add_gravatar_image(item)
for key in set(item.keys()) - API.FIELDS:
item.pop(key)
if plain:
for item in fetched_list:
item['text'] = self.isso.render(item['text'])
return fetched_list
"""
@apiDefine likeResponse
@apiSuccess {Number} likes
The (new) number of likes on the comment.
@apiSuccess {Number} dislikes
The (new) number of dislikes on the comment.
@apiSuccessExample Return updated vote counts:
{
"likes": 4,
"dislikes": 3
}
"""
"""
@api {post} /id/:id/like like
@apiGroup Comment
@apiName like
@apiVersion 0.12.6
@apiDescription
Puts a “like” on a comment. The author of a comment cannot like their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to like.
@apiExample {curl} Like comment with id 23:
curl -X POST 'https://comments.example.com/id/23/like'
@apiUse likeResponse
"""
@xhr
def like(self, environ, request, id):
nv = self.comments.vote(True, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /id/:id/dislike dislike
@apiGroup Comment
@apiName dislike
@apiVersion 0.12.6
@apiDescription
Puts a “dislike” on a comment. The author of a comment cannot dislike their own comment.
@apiUse csrf
@apiParam {Number} id
The id of the comment to dislike.
@apiExample {curl} Dislike comment with id 23:
curl -X POST 'https://comments.example.com/id/23/dislike'
@apiUse likeResponse
"""
@xhr
def dislike(self, environ, request, id):
nv = self.comments.vote(False, id, self._remote_addr(request))
return JSON(nv, 200)
"""
@api {post} /preview preview
@apiGroup Comment
@apiName preview
@apiVersion 0.12.6
@apiDescription
Render comment text using markdown.
@apiBody {String{3...65535}} text
(Raw) comment text
@apiSuccess {String} text
Rendered comment text
@apiExample {curl} Preview comment:
curl -X POST 'https://comments.example.com/preview' -d '{"text": "A sample comment"}'
@apiSuccessExample {json} Rendered comment:
{
"text": "<p>A sample comment</p>"
}
"""
def preview(self, environment, request):
data = request.json
if data.get("text", None) is None:
raise BadRequest("no text given")
return JSON({'text': self.isso.render(data["text"])}, 200)
"""
@api {post} /count Count comments
@apiGroup Thread
@apiName counts
@apiVersion 0.12.6
@apiDescription
Counts the number of comments on multiple threads. The requestor provides a list of thread URIs. The number of comments on each thread is returned as a list, in the same order as the threads were requested. The counts include comments that are responses to comments, but only published comments (i.e. excluding comments pending moderation).
@apiBody {Number[]} urls
Array of URLs for which to fetch comment counts
@apiExample {curl} Get the respective counts of 5 threads:
curl -X POST 'https://comments.example.com/count' -d '["/blog/firstPost.html", "/blog/controversalPost.html", "/blog/howToCode.html", "/blog/boringPost.html", "/blog/isso.html"]'
@apiSuccessExample {json} Counts of 5 threads:
[2, 18, 4, 0, 3]
"""
def counts(self, environ, request):
data = request.json
if not isinstance(data, list) or not all(isinstance(x, str) for x in data):
raise BadRequest("JSON must be a list of URLs")
return JSON(self.comments.count(*data), 200)
"""
@api {get} /feed Atom feed for comments
@apiGroup Thread
@apiName feed
@apiVersion 0.12.6
@apiDescription
Provide an Atom feed for the given thread. Only available if `[rss] base` is set in server config. By default, up to 100 comments are returned.
@apiQuery {String} uri
The uri of the thread to display a feed for
@apiExample {curl} Get an Atom feed for /thread/foo in XML format:
curl 'https://comments.example.com/feed?uri=/thread/foo'
@apiSuccessExample Atom feed for /thread/foo:
<?xml version='1.0' encoding='utf-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:thr="http://purl.org/syndication/thread/1.0">
<updated>2022-05-24T20:38:04.032789Z</updated>
<id>tag:example.com,2018:/isso/thread/thread/foo</id>
<title>Comments for example.com/thread/foo</title>
<entry>
<id>tag:example.com,2018:/isso/1/2</id>
<title>Comment #2</title>
<updated>2022-05-24T20:38:04.032789Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-2" />
<content type="html"><p>And another</p></content>
</entry>
<entry>
<id>tag:example.com,2018:/isso/1/1</id>
<title>Comment #1</title>
<updated>2022-05-24T20:38:00.837703Z</updated>
<author>
<name>Jane Doe</name>
</author>
<link href="http://example.com/thread/foo#isso-1" />
<content type="html"><p>A sample comment</p></content>
</entry>
</feed>
"""
@requires(str, 'uri')
def feed(self, environ, request, uri):
conf = self.isso.conf.section("rss")
if not conf.get('base'):
raise NotFound
args = {
'uri': uri,
'order_by': 'id',
'asc': 0,
'limit': conf.getint('limit')
}
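# A client-supplied limit can only raise the configured limit, never lower it.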
try:
args['limit'] = max(int(request.args.get('limit')), args['limit'])
except TypeError:
pass
except ValueError:
return BadRequest("limit should be integer")
comments = self.comments.fetch(**args)
base = conf.get('base').rstrip('/')
hostname = urlparse(base).netloc
# Let's build an Atom feed.
# RFC 4287: https://tools.ietf.org/html/rfc4287
# RFC 4685: https://tools.ietf.org/html/rfc4685 (threading extensions)
# For IDs: http://web.archive.org/web/20110514113830/http://diveintomark.org/archives/2004/05/28/howto-atom-id
feed = ET.Element('feed', {
'xmlns': 'http://www.w3.org/2005/Atom',
'xmlns:thr': 'http://purl.org/syndication/thread/1.0'
})
# For feed ID, we would use thread ID, but we may not have
# one. Therefore, we use the URI. We don't have a year
# either...
id = ET.SubElement(feed, 'id')
id.text = 'tag:{hostname},2018:/isso/thread{uri}'.format(
hostname=hostname, uri=uri)
# For title, we don't have much either. Be pretty generic.
title = ET.SubElement(feed, 'title')
title.text = 'Comments for {hostname}{uri}'.format(
hostname=hostname, uri=uri)
comment0 = None
for comment in comments:
if comment0 is None:
comment0 = comment
entry = ET.SubElement(feed, 'entry')
# We don't use a real date in ID either to help with
# threading.
id = ET.SubElement(entry, 'id')
id.text = 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['id'])
title = ET.SubElement(entry, 'title')
title.text = 'Comment #{}'.format(comment['id'])
updated = ET.SubElement(entry, 'updated')
updated.text = '{}Z'.format(datetime.fromtimestamp(
comment['modified'] or comment['created']).isoformat())
author = ET.SubElement(entry, 'author')
name = ET.SubElement(author, 'name')
name.text = comment['author']
ET.SubElement(entry, 'link', {
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['id'])
})
content = ET.SubElement(entry, 'content', {
'type': 'html',
})
content.text = self.isso.render(comment['text'])
if comment['parent']:
ET.SubElement(entry, 'thr:in-reply-to', {
'ref': 'tag:{hostname},2018:/isso/{tid}/{id}'.format(
hostname=hostname,
tid=comment['tid'],
id=comment['parent']),
'href': '{base}{uri}#isso-{id}'.format(
base=base,
uri=uri, id=comment['parent'])
})
# Updated is mandatory. If we have comments, we use the date
# of last modification of the first one (which is the last
# one). Otherwise, we use a fixed date.
updated = ET.Element('updated')
if comment0 is None:
updated.text = '1970-01-01T01:00:00Z'
else:
updated.text = datetime.fromtimestamp(
comment0['modified'] or comment0['created']).isoformat()
updated.text += 'Z'
feed.insert(0, updated)
output = StringIO()
ET.ElementTree(feed).write(output,
encoding='utf-8',
xml_declaration=True)
response = XML(output.getvalue(), 200)
# Add an etag/last-modified value for caching purpose
if comment0 is None:
response.set_etag('empty')
response.last_modified = 0
else:
response.set_etag('{tid}-{id}'.format(**comment0))
response.last_modified = comment0['modified'] or comment0['created']
return response.make_conditional(request)
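# Illustrative sketch (not part of the isso source): the Atom feed produced above
# can be consumed with the standard library alone, e.g.:
#
#   from urllib.request import urlopen
#   from xml.etree import ElementTree as ET
#   tree = ET.parse(urlopen("https://comments.example.com/feed?uri=/thread/foo"))
#   ns = {"atom": "http://www.w3.org/2005/Atom"}
#   for entry in tree.findall("atom:entry", ns):
#       print(entry.findtext("atom:title", namespaces=ns))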
"""
@api {get} /config Fetch client config
@apiGroup Thread
@apiName config
@apiVersion 0.13.0
@apiDescription
Returns only the client configuration parameters that depend on server settings.
@apiSuccess {Object[]} config
The client configuration object.
@apiSuccess {Boolean} config.reply-to-self
Commenters can reply to their own comments.
@apiSuccess {Boolean} config.require-author
Commenters must enter a valid name.
@apiSuccess {Boolean} config.require-email
Commenters must enter a valid email address.
@apiSuccess {Boolean} config.reply-notifications
Enable reply notifications via E-mail.
@apiSuccess {Boolean} config.gravatar
Load images from Gravatar service instead of generating them. Also disables regular avatars (see below).
@apiSuccess {Boolean} config.avatar
To avoid having both regular avatars and Gravatars side-by-side,
setting `gravatar` will disable regular avatars. The `avatar` key will
only be sent by the server if `gravatar` is set.
@apiSuccess {Boolean} config.feed
Enable or disable the addition of a link to the feed for the comment
thread.
@apiExample {curl} get the client config:
curl 'https://comments.example.com/config'
@apiSuccessExample {json} Client config:
{
"config": {
"reply-to-self": false,
"require-email": false,
"require-author": false,
"reply-notifications": false,
"gravatar": true,
"avatar": false,
"feed": false
}
}
"""
def config(self, environment, request):
rv = {'config': self.public_conf}
return JSON(rv, 200)
"""
@api {get} /demo/ Isso demo page
@apiGroup Demo
@apiName demo
@apiVersion 0.13.0
@apiPrivate
@apiDescription
Displays a demonstration of Isso with a thread counter and comment widget.
@apiExample {curl} Get demo page
curl 'https://comments.example.com/demo/'
@apiSuccessExample {html} Demo page:
<!DOCTYPE html>
<head>
<title>Isso Demo</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div id="page">
<div id="wrapper" style="max-width: 900px; margin-left: auto; margin-right: auto;">
<h2><a href=".">Isso Demo</a></h2>
<script src="../js/embed.dev.js" data-isso="../" ></script>
<section>
<p>This is a link to a thread, which will display a comment counter:
<a href=".#isso-thread">How many Comments?</a></p>
<p>Below is the actual comment field.</p>
</section>
<section id="isso-thread" data-title="Isso Test"><noscript>Javascript needs to be activated to view comments.</noscript></section>
</div>
</div>
</body>
"""
def demo(self, env, req):
index = pkg_resources.resource_filename('isso', 'demo/index.html')
return send_from_directory(os_path.dirname(index), 'index.html', env)
"""
@api {post} /login/ Log in
@apiGroup Admin
@apiName login
@apiVersion 0.12.6
@apiPrivate
@apiDescription
Log in to the admin interface; on success, redirects to `/admin/`. The password must be sent as form data, not as a `POST` JSON body.
@apiBody {String} password
The admin password as set in `[admin] password` in the server config.
@apiExample {curl} Log in
curl -X POST 'https://comments.example.com/login' -F "password=strong_default_password_for_isso_admin" -c cookie.txt
@apiSuccessExample {html} Login successful:
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="https://comments.example.com/admin/">https://comments.example.com/admin/</a>. If not, click the link.
"""
def login(self, env, req):
if not self.isso.conf.getboolean("admin", "enabled"):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('disabled.html', isso_host_script=isso_host_script)
data = req.form
password = self.isso.conf.get("admin", "password")
if data['password'] and data['password'] == password:
response = redirect(re.sub(
r'/login/$',
'/admin/',
get_current_url(env, strip_querystring=True)
))
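# The admin session is stored in a signed cookie that expires after one day.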
cookie = self.create_cookie(value=self.isso.sign({"logged": True}),
expires=datetime.now() + timedelta(1))
response.headers.add("Set-Cookie", cookie("admin-session"))
response.headers.add("X-Set-Cookie", cookie("isso-admin-session"))
return response
else:
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
return render_template('login.html', isso_host_script=isso_host_script)
"""
@api {get} /admin/ Admin interface
@apiGroup Admin
@apiName admin
@apiVersion 0.12.6
@apiPrivate
@apiPermission admin
@apiDescription
Display an admin interface from which to manage comments. Will redirect to `/login` if not already logged in.
@apiQuery {Number} [page=0]
Page number
@apiQuery {Number{1,2,4}} [mode=2]
The comment’s mode:
value | explanation
--- | ---
`1` | accepted: The comment was accepted by the server and is published.
`2` | in moderation queue: The comment was accepted by the server but awaits moderation.
`4` | deleted, but referenced: The comment was deleted on the server but is still referenced by replies.
@apiQuery {String{id,created,modified,likes,dislikes,tid}} [order_by=created]
Comment ordering
@apiQuery {Number{0,1}} [asc=0]
Ascending
@apiExample {curl} Listing of published comments:
curl 'https://comments.example.com/admin/?mode=1&page=0&order_by=modified&asc=1' -b cookie.txt
"""
def admin(self, env, req):
isso_host_script = self.isso.conf.get("server", "public-endpoint") or local.host
if not self.isso.conf.getboolean("admin", "enabled"):
return render_template('disabled.html', isso_host_script=isso_host_script)
try:
data = self.isso.unsign(req.cookies.get('admin-session', ''),
max_age=60 * 60 * 24)
except BadSignature:
return render_template('login.html', isso_host_script=isso_host_script)
if not data or not data['logged']:
return render_template('login.html', isso_host_script=isso_host_script)
page_size = 100
page = int(req.args.get('page', 0))
order_by = req.args.get('order_by', 'created')
asc = int(req.args.get('asc', 0))
mode = int(req.args.get('mode', 2))
comments = self.comments.fetchall(mode=mode, page=page,
limit=page_size,
order_by=order_by,
asc=asc)
comments_enriched = []
for comment in list(comments):
comment['hash'] = self.isso.sign(comment['id'])
comments_enriched.append(comment)
comment_mode_count = self.comments.count_modes()
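# Total number of pages across all modes, assuming the fixed page size of 100 used above.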
max_page = int(sum(comment_mode_count.values()) / 100)
return render_template('admin.html', comments=comments_enriched,
page=int(page), mode=int(mode),
conf=self.conf, max_page=max_page,
counts=comment_mode_count,
order_by=order_by, asc=asc,
isso_host_script=isso_host_script)
"""
@api {get} /latest latest
@apiGroup Comment
@apiName latest
@apiVersion 0.12.6
@apiDescription
Get the latest comments from the system, no matter which thread. Only available if `[general] latest-enabled` is set to `true` in server config.
@apiQuery {Number} limit
The number of latest comments to retrieve
@apiExample {curl} Get the latest 5 comments
curl 'https://comments.example.com/latest?limit=5'
@apiUse commentResponse
@apiSuccessExample Example result:
[
{
"website": null,
"uri": "/some",
"author": null,
"parent": null,
"created": 1464912312.123416,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 3,
"likes": 1
},
{
"website": null,
"uri": "/other",
"author": null,
"parent": null,
"created": 1464914341.312426,
"text": " <p>I want to use MySQL</p>",
"dislikes": 0,
"modified": null,
"mode": 1,
"id": 4,
"likes": 0
}
]
"""
def latest(self, environ, request):
# if the feature is not allowed, don't present the endpoint
if not self.conf.getboolean("latest-enabled"):
return NotFound(
"Unavailable because 'latest-enabled' not set by site admin"
)
# get and check the limit
bad_limit_msg = "Query parameter 'limit' is mandatory (integer, >0)"
try:
limit = int(request.args['limit'])
except (KeyError, ValueError):
return BadRequest(bad_limit_msg)
if limit <= 0:
return BadRequest(bad_limit_msg)
# retrieve the latest N comments from the DB
all_comments_gen = self.comments.fetchall(limit=None, order_by='created', mode='1')
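# A deque with maxlen only keeps the last `limit` items yielded by the generator,
# i.e. the most recently created comments.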
comments = collections.deque(all_comments_gen, maxlen=limit)
# prepare a special set of fields (except text which is rendered specifically)
fields = {
'author',
'created',
'dislikes',
'id',
'likes',
'mode',
'modified',
'parent',
'text',
'uri',
'website',
}
# process the retrieved comments and build results
result = []
for comment in comments:
processed = {key: comment[key] for key in fields}
processed['text'] = self.isso.render(comment['text'])
result.append(processed)
return JSON(result, 200)
| schneidr | 9a0e1867e4ebe7e1ee7106584adb29b16880a955 | 73d9886100fd56cbceb38e2e00b84f52f0328a8c | I did add my test case in [test_comments.py](https://github.com/posativ/isso/blob/90019450483c601c1f3dee3de1e973a41679e4d9/isso/tests/test_comments.py#L172). | schneidr | 2 |
posativ/isso | 903 | migrate: Handle single newlines in WordPress comments as line breaks | WordPress renders a single newline in a comment as a <br> tag, but Isso renders a single newline in the comment as a single newline in the HTML. This is rendered the same as if it was a space, all text on one line.
To fix, detect single newlines when importing WordPress comments and convert to a line break in Markdown. Add a test for this also.
Example, this WordPress comment (as shown in CDATA of XML export):
> First line of comment.
> Second line of comment.
Renders in WordPress as:
> First line of comment.<br>Second line of comment.
But renders in Isso after import as if it was:
> First line of comment. Second line of comment.
After this commit is applied and comments re-imported, it renders as:
> First line of comment.
> Second line of comment.
<!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [x] (If adding features:) I have added tests to cover my changes
- (N/A) (If docs changes needed:) I have updated the **documentation** accordingly.
- (N/A but please tell me if you disagree) I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message --> | null | 2022-06-06 23:04:49+00:00 | 2022-06-12 10:46:19+00:00 | isso/migrate.py | # -*- encoding: utf-8 -*-
import functools
import io
import json
import logging
import os
import re
import sys
import textwrap
from collections import defaultdict
from time import mktime, strptime, time
from urllib.parse import urlparse
from xml.etree import ElementTree
from isso.utils import anonymize
logger = logging.getLogger("isso")
def strip(val):
if isinstance(val, (str, )):
return val.strip()
return val
class Progress(object):
def __init__(self, end):
self.end = end or 1
self.istty = sys.stdout.isatty()
self.last = 0
def update(self, i, message):
if not self.istty or message is None:
return
cols = int((os.popen('stty size', 'r').read()).split()[1])
message = message[:cols - 7]
if time() - self.last > 0.2:
sys.stdout.write("\r{0}".format(" " * cols))
sys.stdout.write("\r[{0:.0%}] {1}".format(i / self.end, message))
sys.stdout.flush()
self.last = time()
def finish(self, message):
self.last = 0
self.update(self.end, message + "\n")
class Disqus(object):
ns = '{http://disqus.com}'
internals = '{http://disqus.com/disqus-internals}'
def __init__(self, db, xmlfile, empty_id=False):
self.threads = set([])
self.comments = set([])
self.db = db
self.xmlfile = xmlfile
self.empty_id = empty_id
def insert(self, thread, posts):
path = urlparse(thread.find('%slink' % Disqus.ns).text).path
remap = dict()
if path not in self.db.threads:
thread_title = thread.find(Disqus.ns + 'title').text or ''
self.db.threads.new(path, thread_title.strip())
for item in sorted(posts, key=lambda k: k['created']):
dsq_id = item.pop('dsq:id')
item['parent'] = remap.get(item.pop('dsq:parent', None))
rv = self.db.comments.add(path, item)
remap[dsq_id] = rv["id"]
self.comments.update(set(remap.keys()))
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
res = defaultdict(list)
for post in tree.findall(Disqus.ns + 'post'):
email = post.find('{0}author/{0}email'.format(Disqus.ns))
ip = post.find(Disqus.ns + 'ipAddress')
item = {
'dsq:id': post.attrib.get(Disqus.internals + 'id'),
'text': post.find(Disqus.ns + 'message').text,
'author': post.find('{0}author/{0}name'.format(Disqus.ns)).text,
'email': email.text if email is not None else '',
'created': mktime(strptime(
post.find(Disqus.ns + 'createdAt').text, '%Y-%m-%dT%H:%M:%SZ')),
'remote_addr': anonymize(ip.text if ip is not None else '0.0.0.0'),
'mode': 1 if post.find(Disqus.ns + "isDeleted").text == "false" else 4
}
if post.find(Disqus.ns + 'parent') is not None:
item['dsq:parent'] = post.find(
Disqus.ns + 'parent').attrib.get(Disqus.internals + 'id')
res[post.find('%sthread' % Disqus.ns).attrib.get(
Disqus.internals + 'id')].append(item)
progress = Progress(len(tree.findall(Disqus.ns + 'thread')))
for i, thread in enumerate(tree.findall(Disqus.ns + 'thread')):
# Workaround for not crashing with empty thread ids:
thread_id = thread.find(Disqus.ns + 'id')
if not thread_id:
thread_id = dict(text="<empty thread id>", empty=True)
progress.update(i, thread_id.get('text'))
# skip (possibly?) duplicate, but empty thread elements
if thread_id.get('empty') and not self.empty_id:
continue
id = thread.attrib.get(Disqus.internals + 'id')
if id in res:
self.threads.add(id)
self.insert(thread, res[id])
# in case a comment has been deleted (and no further children)
self.db.comments._remove_stale()
progress.finish("{0} threads, {1} comments".format(
len(self.threads), len(self.comments)))
orphans = set(map(lambda e: e.attrib.get(Disqus.internals + "id"),
tree.findall(Disqus.ns + "post"))) - self.comments
if orphans and not self.threads:
print("Isso couldn't import any thread, try again with --empty-id")
elif orphans:
print("Found %i orphans:" % len(orphans))
for post in tree.findall(Disqus.ns + "post"):
if post.attrib.get(Disqus.internals + "id") not in orphans:
continue
email = post.find("{0}author/{0}email".format(Disqus.ns))
print(" * {0} by {1} <{2}>".format(
post.attrib.get(Disqus.internals + "id"),
post.find("{0}author/{0}name".format(Disqus.ns)).text,
email.text if email is not None else ""))
print(textwrap.fill(post.find(Disqus.ns + "message").text,
initial_indent=" ", subsequent_indent=" "))
print("")
class WordPress(object):
ns = "{http://wordpress.org/export/1.0/}"
def __init__(self, db, xmlfile):
self.db = db
self.xmlfile = xmlfile
self.count = 0
for line in io.open(xmlfile, encoding="utf-8"):
m = WordPress.detect(line)
if m:
self.ns = WordPress.ns.replace("1.0", m.group(1))
break
else:
logger.warning("No WXR namespace found, assuming 1.0")
def insert(self, thread):
url = urlparse(thread.find("link").text)
path = url.path
if url.query:
path += "?" + url.query
self.db.threads.new(path, thread.find("title").text.strip())
comments = list(map(self.Comment, thread.findall(self.ns + "comment")))
comments.sort(key=lambda k: k["id"])
remap = {}
ids = set(c["id"] for c in comments)
self.count += len(ids)
while comments:
for i, item in enumerate(comments):
if item["parent"] in ids:
continue
item["parent"] = remap.get(item["parent"], None)
rv = self.db.comments.add(path, item)
remap[item["id"]] = rv["id"]
ids.remove(item["id"])
comments.pop(i)
break
else:
# should never happen, but... it's WordPress.
return
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
skip = 0
items = tree.findall("channel/item")
progress = Progress(len(items))
for i, thread in enumerate(items):
if thread.find("title").text is None or thread.find(self.ns + "comment") is None:
skip += 1
continue
progress.update(i, thread.find("title").text)
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(
len(items) - skip, self.count))
def Comment(self, el):
return {
"text": strip(el.find(self.ns + "comment_content").text),
"author": strip(el.find(self.ns + "comment_author").text),
"email": strip(el.find(self.ns + "comment_author_email").text),
"website": strip(el.find(self.ns + "comment_author_url").text),
"remote_addr": anonymize(
strip(el.find(self.ns + "comment_author_IP").text)),
"created": mktime(strptime(
strip(el.find(self.ns + "comment_date_gmt").text),
"%Y-%m-%d %H:%M:%S")),
"mode": 1 if el.find(self.ns + "comment_approved").text == "1" else 2,
"id": int(el.find(self.ns + "comment_id").text),
"parent": int(el.find(self.ns + "comment_parent").text) or None
}
@classmethod
def detect(cls, peek):
return re.compile("http://wordpress.org/export/(1\\.\\d)/").search(peek)
class Generic(object):
"""A generic importer.
The source format is a json with the following format:
A list of threads, each item being a dict with the following data:
- id: a text representing the unique thread id
- title: the title of the thread
- comments: the list of comments
Each item in that list of comments is a dict with the following data:
- id: an integer with the unique id of the comment inside the thread (it can be repeated
among different threads); this will be used to order the comment inside the thread
- author: the author's name
- email: the author's email
- website: the author's website
- remote_addr: the author's IP
- created: a timestamp, in the format "%Y-%m-%d %H:%M:%S"
"""
def __init__(self, db, json_file):
self.db = db
self.json_file = json_file
self.count = 0
def insert(self, thread):
"""Process a thread and insert its comments in the DB."""
thread_id = thread['id']
title = thread['title']
self.db.threads.new(thread_id, title)
comments = list(map(self._build_comment, thread['comments']))
comments.sort(key=lambda comment: comment['id'])
self.count += len(comments)
for comment in comments:
self.db.comments.add(thread_id, comment)
def migrate(self):
"""Process the input file and fill the DB."""
with io.open(self.json_file, 'rt', encoding='utf8') as fh:
threads = json.load(fh)
progress = Progress(len(threads))
for i, thread in enumerate(threads):
progress.update(i, str(i))
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(len(threads), self.count))
def _build_comment(self, raw_comment):
return {
"text": raw_comment['text'],
"author": raw_comment['author'],
"email": raw_comment['email'],
"website": raw_comment['website'],
"created": mktime(strptime(raw_comment['created'], "%Y-%m-%d %H:%M:%S")),
"mode": 1,
"id": int(raw_comment['id']),
"parent": None,
"remote_addr": raw_comment["remote_addr"],
}
@classmethod
def detect(cls, peek):
"""Return if peek looks like the beginning of a JSON file.
Note that we can not check the JSON properly as we only receive here
the original file truncated.
"""
return peek.startswith("[{")
def autodetect(peek):
if 'xmlns="http://disqus.com' in peek:
return Disqus
m = WordPress.detect(peek)
if m:
return WordPress
if Generic.detect(peek):
return Generic
return None
def dispatch(type, db, dump, empty_id=False):
if db.execute("SELECT * FROM comments").fetchone():
if input("Isso DB is not empty! Continue? [y/N]: ") not in ("y", "Y"):
raise SystemExit("Abort.")
if type == "disqus":
cls = Disqus
elif type == "wordpress":
cls = WordPress
elif type == "generic":
cls = Generic
else:
with io.open(dump, encoding="utf-8") as fp:
cls = autodetect(fp.read(io.DEFAULT_BUFFER_SIZE))
if cls is None:
raise SystemExit("Unknown format, abort.")
if cls is Disqus:
cls = functools.partial(cls, empty_id=empty_id)
cls(db, dump).migrate()
| # -*- encoding: utf-8 -*-
import functools
import io
import json
import logging
import os
import re
import sys
import textwrap
from collections import defaultdict
from time import mktime, strptime, time
from urllib.parse import urlparse
from xml.etree import ElementTree
from isso.utils import anonymize
logger = logging.getLogger("isso")
def strip(val):
if isinstance(val, (str, )):
return val.strip()
return val
class Progress(object):
def __init__(self, end):
self.end = end or 1
self.istty = sys.stdout.isatty()
self.last = 0
def update(self, i, message):
if not self.istty or message is None:
return
cols = int((os.popen('stty size', 'r').read()).split()[1])
message = message[:cols - 7]
if time() - self.last > 0.2:
sys.stdout.write("\r{0}".format(" " * cols))
sys.stdout.write("\r[{0:.0%}] {1}".format(i / self.end, message))
sys.stdout.flush()
self.last = time()
def finish(self, message):
self.last = 0
self.update(self.end, message + "\n")
class Disqus(object):
ns = '{http://disqus.com}'
internals = '{http://disqus.com/disqus-internals}'
def __init__(self, db, xmlfile, empty_id=False):
self.threads = set([])
self.comments = set([])
self.db = db
self.xmlfile = xmlfile
self.empty_id = empty_id
def insert(self, thread, posts):
path = urlparse(thread.find('%slink' % Disqus.ns).text).path
remap = dict()
if path not in self.db.threads:
thread_title = thread.find(Disqus.ns + 'title').text or ''
self.db.threads.new(path, thread_title.strip())
for item in sorted(posts, key=lambda k: k['created']):
dsq_id = item.pop('dsq:id')
item['parent'] = remap.get(item.pop('dsq:parent', None))
rv = self.db.comments.add(path, item)
remap[dsq_id] = rv["id"]
self.comments.update(set(remap.keys()))
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
res = defaultdict(list)
for post in tree.findall(Disqus.ns + 'post'):
email = post.find('{0}author/{0}email'.format(Disqus.ns))
ip = post.find(Disqus.ns + 'ipAddress')
item = {
'dsq:id': post.attrib.get(Disqus.internals + 'id'),
'text': post.find(Disqus.ns + 'message').text,
'author': post.find('{0}author/{0}name'.format(Disqus.ns)).text,
'email': email.text if email is not None else '',
'created': mktime(strptime(
post.find(Disqus.ns + 'createdAt').text, '%Y-%m-%dT%H:%M:%SZ')),
'remote_addr': anonymize(ip.text if ip is not None else '0.0.0.0'),
'mode': 1 if post.find(Disqus.ns + "isDeleted").text == "false" else 4
}
if post.find(Disqus.ns + 'parent') is not None:
item['dsq:parent'] = post.find(
Disqus.ns + 'parent').attrib.get(Disqus.internals + 'id')
res[post.find('%sthread' % Disqus.ns).attrib.get(
Disqus.internals + 'id')].append(item)
progress = Progress(len(tree.findall(Disqus.ns + 'thread')))
for i, thread in enumerate(tree.findall(Disqus.ns + 'thread')):
# Workaround for not crashing with empty thread ids:
thread_id = thread.find(Disqus.ns + 'id')
if not thread_id:
thread_id = dict(text="<empty thread id>", empty=True)
progress.update(i, thread_id.get('text'))
# skip (possibly?) duplicate, but empty thread elements
if thread_id.get('empty') and not self.empty_id:
continue
id = thread.attrib.get(Disqus.internals + 'id')
if id in res:
self.threads.add(id)
self.insert(thread, res[id])
# in case a comment has been deleted (and no further children)
self.db.comments._remove_stale()
progress.finish("{0} threads, {1} comments".format(
len(self.threads), len(self.comments)))
orphans = set(map(lambda e: e.attrib.get(Disqus.internals + "id"),
tree.findall(Disqus.ns + "post"))) - self.comments
if orphans and not self.threads:
print("Isso couldn't import any thread, try again with --empty-id")
elif orphans:
print("Found %i orphans:" % len(orphans))
for post in tree.findall(Disqus.ns + "post"):
if post.attrib.get(Disqus.internals + "id") not in orphans:
continue
email = post.find("{0}author/{0}email".format(Disqus.ns))
print(" * {0} by {1} <{2}>".format(
post.attrib.get(Disqus.internals + "id"),
post.find("{0}author/{0}name".format(Disqus.ns)).text,
email.text if email is not None else ""))
print(textwrap.fill(post.find(Disqus.ns + "message").text,
initial_indent=" ", subsequent_indent=" "))
print("")
class WordPress(object):
ns = "{http://wordpress.org/export/1.0/}"
def __init__(self, db, xmlfile):
self.db = db
self.xmlfile = xmlfile
self.count = 0
for line in io.open(xmlfile, encoding="utf-8"):
m = WordPress.detect(line)
if m:
self.ns = WordPress.ns.replace("1.0", m.group(1))
break
else:
logger.warning("No WXR namespace found, assuming 1.0")
def insert(self, thread):
url = urlparse(thread.find("link").text)
path = url.path
if url.query:
path += "?" + url.query
self.db.threads.new(path, thread.find("title").text.strip())
comments = list(map(self.Comment, thread.findall(self.ns + "comment")))
comments.sort(key=lambda k: k["id"])
remap = {}
ids = set(c["id"] for c in comments)
self.count += len(ids)
while comments:
for i, item in enumerate(comments):
if item["parent"] in ids:
continue
item["parent"] = remap.get(item["parent"], None)
rv = self.db.comments.add(path, item)
remap[item["id"]] = rv["id"]
ids.remove(item["id"])
comments.pop(i)
break
else:
# should never happen, but... it's WordPress.
return
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
skip = 0
items = tree.findall("channel/item")
progress = Progress(len(items))
for i, thread in enumerate(items):
if thread.find("title").text is None or thread.find(self.ns + "comment") is None:
skip += 1
continue
progress.update(i, thread.find("title").text)
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(
len(items) - skip, self.count))
def _process_comment_content(self, text):
# WordPress comment text renders a single newline between two blocks of
# text as a <br> tag, so add an explicit Markdown line break on import
# (Otherwise multiple blocks of text separated by single newlines are
# all shown as one long line.)
text = re.sub(r'(?!^\n)\n(?!^\n)', ' \n', text, 0)
return strip(text)
def Comment(self, el):
return {
"text": self._process_comment_content(el.find(self.ns + "comment_content").text),
"author": strip(el.find(self.ns + "comment_author").text),
"email": strip(el.find(self.ns + "comment_author_email").text),
"website": strip(el.find(self.ns + "comment_author_url").text),
"remote_addr": anonymize(
strip(el.find(self.ns + "comment_author_IP").text)),
"created": mktime(strptime(
strip(el.find(self.ns + "comment_date_gmt").text),
"%Y-%m-%d %H:%M:%S")),
"mode": 1 if el.find(self.ns + "comment_approved").text == "1" else 2,
"id": int(el.find(self.ns + "comment_id").text),
"parent": int(el.find(self.ns + "comment_parent").text) or None
}
@classmethod
def detect(cls, peek):
return re.compile("http://wordpress.org/export/(1\\.\\d)/").search(peek)
class Generic(object):
"""A generic importer.
The source format is a json with the following format:
A list of threads, each item being a dict with the following data:
- id: a text representing the unique thread id
- title: the title of the thread
- comments: the list of comments
Each item in that list of comments is a dict with the following data:
- id: an integer with the unique id of the comment inside the thread (it can be repeated
among different threads); this will be used to order the comment inside the thread
- author: the author's name
- email: the author's email
- website: the author's website
- remote_addr: the author's IP
- created: a timestamp, in the format "%Y-%m-%d %H:%M:%S"
"""
def __init__(self, db, json_file):
self.db = db
self.json_file = json_file
self.count = 0
def insert(self, thread):
"""Process a thread and insert its comments in the DB."""
thread_id = thread['id']
title = thread['title']
self.db.threads.new(thread_id, title)
comments = list(map(self._build_comment, thread['comments']))
comments.sort(key=lambda comment: comment['id'])
self.count += len(comments)
for comment in comments:
self.db.comments.add(thread_id, comment)
def migrate(self):
"""Process the input file and fill the DB."""
with io.open(self.json_file, 'rt', encoding='utf8') as fh:
threads = json.load(fh)
progress = Progress(len(threads))
for i, thread in enumerate(threads):
progress.update(i, str(i))
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(len(threads), self.count))
def _build_comment(self, raw_comment):
return {
"text": raw_comment['text'],
"author": raw_comment['author'],
"email": raw_comment['email'],
"website": raw_comment['website'],
"created": mktime(strptime(raw_comment['created'], "%Y-%m-%d %H:%M:%S")),
"mode": 1,
"id": int(raw_comment['id']),
"parent": None,
"remote_addr": raw_comment["remote_addr"],
}
@classmethod
def detect(cls, peek):
"""Return if peek looks like the beginning of a JSON file.
Note that we can not check the JSON properly as we only receive here
the original file truncated.
"""
return peek.startswith("[{")
def autodetect(peek):
if 'xmlns="http://disqus.com' in peek:
return Disqus
m = WordPress.detect(peek)
if m:
return WordPress
if Generic.detect(peek):
return Generic
return None
def dispatch(type, db, dump, empty_id=False):
if db.execute("SELECT * FROM comments").fetchone():
if input("Isso DB is not empty! Continue? [y/N]: ") not in ("y", "Y"):
raise SystemExit("Abort.")
if type == "disqus":
cls = Disqus
elif type == "wordpress":
cls = WordPress
elif type == "generic":
cls = Generic
else:
with io.open(dump, encoding="utf-8") as fp:
cls = autodetect(fp.read(io.DEFAULT_BUFFER_SIZE))
if cls is None:
raise SystemExit("Unknown format, abort.")
if cls is Disqus:
cls = functools.partial(cls, empty_id=empty_id)
cls(db, dump).migrate()
| projectgus | b1021d1e57dd595a581a614ecd26edea4ae69557 | 30ef2180f70ebd58b929673e27d73d889442eb99 | ```suggestion
text = re.sub(r'(?!^\n)\n(?!^\n)', ' \n', text, 0)
``` | ix5 | 3 |
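A minimal sketch (not taken from the PR) of the conversion under discussion; it uses a lookbehind/lookahead variant of the pattern rather than the exact regex in the suggestion above:
```python
import re

wp_comment = "First line of comment.\nSecond line of comment.\n\nA second paragraph."

# Single newlines become Markdown hard line breaks (two trailing spaces + newline);
# blank-line paragraph breaks are left untouched.
converted = re.sub(r'(?<!\n)\n(?!\n)', '  \n', wp_comment)

assert converted == "First line of comment.  \nSecond line of comment.\n\nA second paragraph."
```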
posativ/isso | 903 | migrate: Handle single newlines in WordPress comments as line breaks | WordPress renders a single newline in a comment as a <br> tag, but Isso renders a single newline in the comment as a single newline in the HTML. This is rendered the same as if it was a space, all text on one line.
To fix, detect single newlines when importing WordPress comments and convert to a line break in Markdown. Add a test for this also.
Example, this WordPress comment (as shown in CDATA of XML export):
> First line of comment.
> Second line of comment.
Renders in WordPress as:
> First line of comment.<br>Second line of comment.
But renders in Isso after import as if it was:
> First line of comment. Second line of comment.
After this commit is applied and comments re-imported, it renders as:
> First line of comment.
> Second line of comment.
<!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [x] (If adding features:) I have added tests to cover my changes
- (N/A) (If docs changes needed:) I have updated the **documentation** accordingly.
- (N/A but please tell me if you disagree) I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message --> | null | 2022-06-06 23:04:49+00:00 | 2022-06-12 10:46:19+00:00 | isso/migrate.py | # -*- encoding: utf-8 -*-
import functools
import io
import json
import logging
import os
import re
import sys
import textwrap
from collections import defaultdict
from time import mktime, strptime, time
from urllib.parse import urlparse
from xml.etree import ElementTree
from isso.utils import anonymize
logger = logging.getLogger("isso")
def strip(val):
if isinstance(val, (str, )):
return val.strip()
return val
class Progress(object):
def __init__(self, end):
self.end = end or 1
self.istty = sys.stdout.isatty()
self.last = 0
def update(self, i, message):
if not self.istty or message is None:
return
cols = int((os.popen('stty size', 'r').read()).split()[1])
message = message[:cols - 7]
if time() - self.last > 0.2:
sys.stdout.write("\r{0}".format(" " * cols))
sys.stdout.write("\r[{0:.0%}] {1}".format(i / self.end, message))
sys.stdout.flush()
self.last = time()
def finish(self, message):
self.last = 0
self.update(self.end, message + "\n")
class Disqus(object):
ns = '{http://disqus.com}'
internals = '{http://disqus.com/disqus-internals}'
def __init__(self, db, xmlfile, empty_id=False):
self.threads = set([])
self.comments = set([])
self.db = db
self.xmlfile = xmlfile
self.empty_id = empty_id
def insert(self, thread, posts):
path = urlparse(thread.find('%slink' % Disqus.ns).text).path
remap = dict()
if path not in self.db.threads:
thread_title = thread.find(Disqus.ns + 'title').text or ''
self.db.threads.new(path, thread_title.strip())
for item in sorted(posts, key=lambda k: k['created']):
dsq_id = item.pop('dsq:id')
item['parent'] = remap.get(item.pop('dsq:parent', None))
rv = self.db.comments.add(path, item)
remap[dsq_id] = rv["id"]
self.comments.update(set(remap.keys()))
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
res = defaultdict(list)
for post in tree.findall(Disqus.ns + 'post'):
email = post.find('{0}author/{0}email'.format(Disqus.ns))
ip = post.find(Disqus.ns + 'ipAddress')
item = {
'dsq:id': post.attrib.get(Disqus.internals + 'id'),
'text': post.find(Disqus.ns + 'message').text,
'author': post.find('{0}author/{0}name'.format(Disqus.ns)).text,
'email': email.text if email is not None else '',
'created': mktime(strptime(
post.find(Disqus.ns + 'createdAt').text, '%Y-%m-%dT%H:%M:%SZ')),
'remote_addr': anonymize(ip.text if ip is not None else '0.0.0.0'),
'mode': 1 if post.find(Disqus.ns + "isDeleted").text == "false" else 4
}
if post.find(Disqus.ns + 'parent') is not None:
item['dsq:parent'] = post.find(
Disqus.ns + 'parent').attrib.get(Disqus.internals + 'id')
res[post.find('%sthread' % Disqus.ns).attrib.get(
Disqus.internals + 'id')].append(item)
progress = Progress(len(tree.findall(Disqus.ns + 'thread')))
for i, thread in enumerate(tree.findall(Disqus.ns + 'thread')):
# Workaround for not crashing with empty thread ids:
thread_id = thread.find(Disqus.ns + 'id')
if not thread_id:
thread_id = dict(text="<empty thread id>", empty=True)
progress.update(i, thread_id.get('text'))
# skip (possibly?) duplicate, but empty thread elements
if thread_id.get('empty') and not self.empty_id:
continue
id = thread.attrib.get(Disqus.internals + 'id')
if id in res:
self.threads.add(id)
self.insert(thread, res[id])
# in case a comment has been deleted (and no further children)
self.db.comments._remove_stale()
progress.finish("{0} threads, {1} comments".format(
len(self.threads), len(self.comments)))
orphans = set(map(lambda e: e.attrib.get(Disqus.internals + "id"),
tree.findall(Disqus.ns + "post"))) - self.comments
if orphans and not self.threads:
print("Isso couldn't import any thread, try again with --empty-id")
elif orphans:
print("Found %i orphans:" % len(orphans))
for post in tree.findall(Disqus.ns + "post"):
if post.attrib.get(Disqus.internals + "id") not in orphans:
continue
email = post.find("{0}author/{0}email".format(Disqus.ns))
print(" * {0} by {1} <{2}>".format(
post.attrib.get(Disqus.internals + "id"),
post.find("{0}author/{0}name".format(Disqus.ns)).text,
email.text if email is not None else ""))
print(textwrap.fill(post.find(Disqus.ns + "message").text,
initial_indent=" ", subsequent_indent=" "))
print("")
class WordPress(object):
ns = "{http://wordpress.org/export/1.0/}"
def __init__(self, db, xmlfile):
self.db = db
self.xmlfile = xmlfile
self.count = 0
for line in io.open(xmlfile, encoding="utf-8"):
m = WordPress.detect(line)
if m:
self.ns = WordPress.ns.replace("1.0", m.group(1))
break
else:
logger.warning("No WXR namespace found, assuming 1.0")
def insert(self, thread):
url = urlparse(thread.find("link").text)
path = url.path
if url.query:
path += "?" + url.query
self.db.threads.new(path, thread.find("title").text.strip())
comments = list(map(self.Comment, thread.findall(self.ns + "comment")))
comments.sort(key=lambda k: k["id"])
remap = {}
ids = set(c["id"] for c in comments)
self.count += len(ids)
while comments:
for i, item in enumerate(comments):
if item["parent"] in ids:
continue
item["parent"] = remap.get(item["parent"], None)
rv = self.db.comments.add(path, item)
remap[item["id"]] = rv["id"]
ids.remove(item["id"])
comments.pop(i)
break
else:
# should never happen, but... it's WordPress.
return
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
skip = 0
items = tree.findall("channel/item")
progress = Progress(len(items))
for i, thread in enumerate(items):
if thread.find("title").text is None or thread.find(self.ns + "comment") is None:
skip += 1
continue
progress.update(i, thread.find("title").text)
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(
len(items) - skip, self.count))
def Comment(self, el):
return {
"text": strip(el.find(self.ns + "comment_content").text),
"author": strip(el.find(self.ns + "comment_author").text),
"email": strip(el.find(self.ns + "comment_author_email").text),
"website": strip(el.find(self.ns + "comment_author_url").text),
"remote_addr": anonymize(
strip(el.find(self.ns + "comment_author_IP").text)),
"created": mktime(strptime(
strip(el.find(self.ns + "comment_date_gmt").text),
"%Y-%m-%d %H:%M:%S")),
"mode": 1 if el.find(self.ns + "comment_approved").text == "1" else 2,
"id": int(el.find(self.ns + "comment_id").text),
"parent": int(el.find(self.ns + "comment_parent").text) or None
}
@classmethod
def detect(cls, peek):
return re.compile("http://wordpress.org/export/(1\\.\\d)/").search(peek)
class Generic(object):
"""A generic importer.
The source format is a json with the following format:
A list of threads, each item being a dict with the following data:
- id: a text representing the unique thread id
- title: the title of the thread
- comments: the list of comments
Each item in that list of comments is a dict with the following data:
- id: an integer with the unique id of the comment inside the thread (it can be repeated
among different threads); this will be used to order the comment inside the thread
- author: the author's name
- email: the author's email
- website: the author's website
- remote_addr: the author's IP
- created: a timestamp, in the format "%Y-%m-%d %H:%M:%S"
"""
def __init__(self, db, json_file):
self.db = db
self.json_file = json_file
self.count = 0
def insert(self, thread):
"""Process a thread and insert its comments in the DB."""
thread_id = thread['id']
title = thread['title']
self.db.threads.new(thread_id, title)
comments = list(map(self._build_comment, thread['comments']))
comments.sort(key=lambda comment: comment['id'])
self.count += len(comments)
for comment in comments:
self.db.comments.add(thread_id, comment)
def migrate(self):
"""Process the input file and fill the DB."""
with io.open(self.json_file, 'rt', encoding='utf8') as fh:
threads = json.load(fh)
progress = Progress(len(threads))
for i, thread in enumerate(threads):
progress.update(i, str(i))
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(len(threads), self.count))
def _build_comment(self, raw_comment):
return {
"text": raw_comment['text'],
"author": raw_comment['author'],
"email": raw_comment['email'],
"website": raw_comment['website'],
"created": mktime(strptime(raw_comment['created'], "%Y-%m-%d %H:%M:%S")),
"mode": 1,
"id": int(raw_comment['id']),
"parent": None,
"remote_addr": raw_comment["remote_addr"],
}
@classmethod
def detect(cls, peek):
"""Return if peek looks like the beginning of a JSON file.
Note that we can not check the JSON properly as we only receive here
the original file truncated.
"""
return peek.startswith("[{")
def autodetect(peek):
if 'xmlns="http://disqus.com' in peek:
return Disqus
m = WordPress.detect(peek)
if m:
return WordPress
if Generic.detect(peek):
return Generic
return None
def dispatch(type, db, dump, empty_id=False):
if db.execute("SELECT * FROM comments").fetchone():
if input("Isso DB is not empty! Continue? [y/N]: ") not in ("y", "Y"):
raise SystemExit("Abort.")
if type == "disqus":
cls = Disqus
elif type == "wordpress":
cls = WordPress
elif type == "generic":
cls = Generic
else:
with io.open(dump, encoding="utf-8") as fp:
cls = autodetect(fp.read(io.DEFAULT_BUFFER_SIZE))
if cls is None:
raise SystemExit("Unknown format, abort.")
if cls is Disqus:
cls = functools.partial(cls, empty_id=empty_id)
cls(db, dump).migrate()
| # -*- encoding: utf-8 -*-
import functools
import io
import json
import logging
import os
import re
import sys
import textwrap
from collections import defaultdict
from time import mktime, strptime, time
from urllib.parse import urlparse
from xml.etree import ElementTree
from isso.utils import anonymize
logger = logging.getLogger("isso")
def strip(val):
if isinstance(val, (str, )):
return val.strip()
return val
class Progress(object):
def __init__(self, end):
self.end = end or 1
self.istty = sys.stdout.isatty()
self.last = 0
def update(self, i, message):
if not self.istty or message is None:
return
cols = int((os.popen('stty size', 'r').read()).split()[1])
message = message[:cols - 7]
if time() - self.last > 0.2:
sys.stdout.write("\r{0}".format(" " * cols))
sys.stdout.write("\r[{0:.0%}] {1}".format(i / self.end, message))
sys.stdout.flush()
self.last = time()
def finish(self, message):
self.last = 0
self.update(self.end, message + "\n")
class Disqus(object):
ns = '{http://disqus.com}'
internals = '{http://disqus.com/disqus-internals}'
def __init__(self, db, xmlfile, empty_id=False):
self.threads = set([])
self.comments = set([])
self.db = db
self.xmlfile = xmlfile
self.empty_id = empty_id
def insert(self, thread, posts):
path = urlparse(thread.find('%slink' % Disqus.ns).text).path
remap = dict()
if path not in self.db.threads:
thread_title = thread.find(Disqus.ns + 'title').text or ''
self.db.threads.new(path, thread_title.strip())
for item in sorted(posts, key=lambda k: k['created']):
dsq_id = item.pop('dsq:id')
item['parent'] = remap.get(item.pop('dsq:parent', None))
rv = self.db.comments.add(path, item)
remap[dsq_id] = rv["id"]
self.comments.update(set(remap.keys()))
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
res = defaultdict(list)
for post in tree.findall(Disqus.ns + 'post'):
email = post.find('{0}author/{0}email'.format(Disqus.ns))
ip = post.find(Disqus.ns + 'ipAddress')
item = {
'dsq:id': post.attrib.get(Disqus.internals + 'id'),
'text': post.find(Disqus.ns + 'message').text,
'author': post.find('{0}author/{0}name'.format(Disqus.ns)).text,
'email': email.text if email is not None else '',
'created': mktime(strptime(
post.find(Disqus.ns + 'createdAt').text, '%Y-%m-%dT%H:%M:%SZ')),
'remote_addr': anonymize(ip.text if ip is not None else '0.0.0.0'),
'mode': 1 if post.find(Disqus.ns + "isDeleted").text == "false" else 4
}
if post.find(Disqus.ns + 'parent') is not None:
item['dsq:parent'] = post.find(
Disqus.ns + 'parent').attrib.get(Disqus.internals + 'id')
res[post.find('%sthread' % Disqus.ns).attrib.get(
Disqus.internals + 'id')].append(item)
progress = Progress(len(tree.findall(Disqus.ns + 'thread')))
for i, thread in enumerate(tree.findall(Disqus.ns + 'thread')):
# Workaround for not crashing with empty thread ids:
thread_id = thread.find(Disqus.ns + 'id')
if not thread_id:
thread_id = dict(text="<empty thread id>", empty=True)
progress.update(i, thread_id.get('text'))
# skip (possibly?) duplicate, but empty thread elements
if thread_id.get('empty') and not self.empty_id:
continue
id = thread.attrib.get(Disqus.internals + 'id')
if id in res:
self.threads.add(id)
self.insert(thread, res[id])
# in case a comment has been deleted (and no further children)
self.db.comments._remove_stale()
progress.finish("{0} threads, {1} comments".format(
len(self.threads), len(self.comments)))
orphans = set(map(lambda e: e.attrib.get(Disqus.internals + "id"),
tree.findall(Disqus.ns + "post"))) - self.comments
if orphans and not self.threads:
print("Isso couldn't import any thread, try again with --empty-id")
elif orphans:
print("Found %i orphans:" % len(orphans))
for post in tree.findall(Disqus.ns + "post"):
if post.attrib.get(Disqus.internals + "id") not in orphans:
continue
email = post.find("{0}author/{0}email".format(Disqus.ns))
print(" * {0} by {1} <{2}>".format(
post.attrib.get(Disqus.internals + "id"),
post.find("{0}author/{0}name".format(Disqus.ns)).text,
email.text if email is not None else ""))
print(textwrap.fill(post.find(Disqus.ns + "message").text,
initial_indent=" ", subsequent_indent=" "))
print("")
class WordPress(object):
ns = "{http://wordpress.org/export/1.0/}"
def __init__(self, db, xmlfile):
self.db = db
self.xmlfile = xmlfile
self.count = 0
for line in io.open(xmlfile, encoding="utf-8"):
m = WordPress.detect(line)
if m:
self.ns = WordPress.ns.replace("1.0", m.group(1))
break
else:
logger.warning("No WXR namespace found, assuming 1.0")
def insert(self, thread):
url = urlparse(thread.find("link").text)
path = url.path
if url.query:
path += "?" + url.query
self.db.threads.new(path, thread.find("title").text.strip())
comments = list(map(self.Comment, thread.findall(self.ns + "comment")))
comments.sort(key=lambda k: k["id"])
remap = {}
ids = set(c["id"] for c in comments)
self.count += len(ids)
while comments:
for i, item in enumerate(comments):
if item["parent"] in ids:
continue
item["parent"] = remap.get(item["parent"], None)
rv = self.db.comments.add(path, item)
remap[item["id"]] = rv["id"]
ids.remove(item["id"])
comments.pop(i)
break
else:
# should never happen, but... it's WordPress.
return
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
skip = 0
items = tree.findall("channel/item")
progress = Progress(len(items))
for i, thread in enumerate(items):
if thread.find("title").text is None or thread.find(self.ns + "comment") is None:
skip += 1
continue
progress.update(i, thread.find("title").text)
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(
len(items) - skip, self.count))
def _process_comment_content(self, text):
# WordPress comment text renders a single newline between two blocks of
# text as a <br> tag, so add an explicit Markdown line break on import
# (Otherwise multiple blocks of text separated by single newlines are
# all shown as one long line.)
text = re.sub(r'(?!^\n)\n(?!^\n)', ' \n', text, 0)
return strip(text)
def Comment(self, el):
return {
"text": self._process_comment_content(el.find(self.ns + "comment_content").text),
"author": strip(el.find(self.ns + "comment_author").text),
"email": strip(el.find(self.ns + "comment_author_email").text),
"website": strip(el.find(self.ns + "comment_author_url").text),
"remote_addr": anonymize(
strip(el.find(self.ns + "comment_author_IP").text)),
"created": mktime(strptime(
strip(el.find(self.ns + "comment_date_gmt").text),
"%Y-%m-%d %H:%M:%S")),
"mode": 1 if el.find(self.ns + "comment_approved").text == "1" else 2,
"id": int(el.find(self.ns + "comment_id").text),
"parent": int(el.find(self.ns + "comment_parent").text) or None
}
@classmethod
def detect(cls, peek):
return re.compile("http://wordpress.org/export/(1\\.\\d)/").search(peek)
class Generic(object):
"""A generic importer.
The source format is a json with the following format:
A list of threads, each item being a dict with the following data:
- id: a text representing the unique thread id
- title: the title of the thread
- comments: the list of comments
Each item in that list of comments is a dict with the following data:
- id: an integer with the unique id of the comment inside the thread (it can be repeated
among different threads); this will be used to order the comment inside the thread
- author: the author's name
- email: the author's email
- website: the author's website
- remote_addr: the author's IP
- created: a timestamp, in the format "%Y-%m-%d %H:%M:%S"
"""
def __init__(self, db, json_file):
self.db = db
self.json_file = json_file
self.count = 0
def insert(self, thread):
"""Process a thread and insert its comments in the DB."""
thread_id = thread['id']
title = thread['title']
self.db.threads.new(thread_id, title)
comments = list(map(self._build_comment, thread['comments']))
comments.sort(key=lambda comment: comment['id'])
self.count += len(comments)
for comment in comments:
self.db.comments.add(thread_id, comment)
def migrate(self):
"""Process the input file and fill the DB."""
with io.open(self.json_file, 'rt', encoding='utf8') as fh:
threads = json.load(fh)
progress = Progress(len(threads))
for i, thread in enumerate(threads):
progress.update(i, str(i))
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(len(threads), self.count))
def _build_comment(self, raw_comment):
return {
"text": raw_comment['text'],
"author": raw_comment['author'],
"email": raw_comment['email'],
"website": raw_comment['website'],
"created": mktime(strptime(raw_comment['created'], "%Y-%m-%d %H:%M:%S")),
"mode": 1,
"id": int(raw_comment['id']),
"parent": None,
"remote_addr": raw_comment["remote_addr"],
}
@classmethod
def detect(cls, peek):
"""Return if peek looks like the beginning of a JSON file.
Note that we can not check the JSON properly as we only receive here
the original file truncated.
"""
return peek.startswith("[{")
def autodetect(peek):
if 'xmlns="http://disqus.com' in peek:
return Disqus
m = WordPress.detect(peek)
if m:
return WordPress
if Generic.detect(peek):
return Generic
return None
def dispatch(type, db, dump, empty_id=False):
if db.execute("SELECT * FROM comments").fetchone():
if input("Isso DB is not empty! Continue? [y/N]: ") not in ("y", "Y"):
raise SystemExit("Abort.")
if type == "disqus":
cls = Disqus
elif type == "wordpress":
cls = WordPress
elif type == "generic":
cls = Generic
else:
with io.open(dump, encoding="utf-8") as fp:
cls = autodetect(fp.read(io.DEFAULT_BUFFER_SIZE))
if cls is None:
raise SystemExit("Unknown format, abort.")
if cls is Disqus:
cls = functools.partial(cls, empty_id=empty_id)
cls(db, dump).migrate()
| projectgus | b1021d1e57dd595a581a614ecd26edea4ae69557 | 30ef2180f70ebd58b929673e27d73d889442eb99 | I meant literally that you should insert two spaces at the end of the line. E.g. this:
```markdown
This is a text with 2 trailing spaces  
Next line

Two newlines should not be affected.
```
will render as
```html
<p>This is a text with 2 trailing spaces<br>
Next Line</p>
<p>Two newlines should not be affected</p>
``` | ix5 | 4 |
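A small sketch (not from the PR) of the rendering behaviour described above, using the third-party `markdown` package rather than the renderer isso ships with; the exact HTML may differ slightly:
```python
import markdown  # pip install markdown

text = "This is a text with 2 trailing spaces  \nNext line\n\nTwo newlines should not be affected."
print(markdown.markdown(text))
# Roughly:
#   <p>This is a text with 2 trailing spaces<br />
#   Next line</p>
#   <p>Two newlines should not be affected.</p>
```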
posativ/isso | 903 | migrate: Handle single newlines in WordPress comments as line breaks | WordPress renders a single newline in a comment as a <br> tag, but Isso renders a single newline in the comment as a single newline in the HTML. This is rendered the same as if it was a space, all text on one line.
To fix, detect single newlines when importing WordPress comments and convert to a line break in Markdown. Add a test for this also.
Example, this WordPress comment (as shown in CDATA of XML export):
> First line of comment.
> Second line of comment.
Renders in WordPress as:
> First line of comment.<br>Second line of comment.
But renders in Isso after import as if it was:
> First line of comment. Second line of comment.
After this commit is applied and comments re-imported, it renders as:
> First line of comment.
> Second line of comment.
<!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [x] (If adding features:) I have added tests to cover my changes
- (N/A) (If docs changes needed:) I have updated the **documentation** accordingly.
- (N/A but please tell me if you disagree) I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message --> | null | 2022-06-06 23:04:49+00:00 | 2022-06-12 10:46:19+00:00 | isso/migrate.py | # -*- encoding: utf-8 -*-
import functools
import io
import json
import logging
import os
import re
import sys
import textwrap
from collections import defaultdict
from time import mktime, strptime, time
from urllib.parse import urlparse
from xml.etree import ElementTree
from isso.utils import anonymize
logger = logging.getLogger("isso")
def strip(val):
if isinstance(val, (str, )):
return val.strip()
return val
class Progress(object):
def __init__(self, end):
self.end = end or 1
self.istty = sys.stdout.isatty()
self.last = 0
def update(self, i, message):
if not self.istty or message is None:
return
cols = int((os.popen('stty size', 'r').read()).split()[1])
message = message[:cols - 7]
if time() - self.last > 0.2:
sys.stdout.write("\r{0}".format(" " * cols))
sys.stdout.write("\r[{0:.0%}] {1}".format(i / self.end, message))
sys.stdout.flush()
self.last = time()
def finish(self, message):
self.last = 0
self.update(self.end, message + "\n")
class Disqus(object):
ns = '{http://disqus.com}'
internals = '{http://disqus.com/disqus-internals}'
def __init__(self, db, xmlfile, empty_id=False):
self.threads = set([])
self.comments = set([])
self.db = db
self.xmlfile = xmlfile
self.empty_id = empty_id
def insert(self, thread, posts):
path = urlparse(thread.find('%slink' % Disqus.ns).text).path
remap = dict()
if path not in self.db.threads:
thread_title = thread.find(Disqus.ns + 'title').text or ''
self.db.threads.new(path, thread_title.strip())
for item in sorted(posts, key=lambda k: k['created']):
dsq_id = item.pop('dsq:id')
item['parent'] = remap.get(item.pop('dsq:parent', None))
rv = self.db.comments.add(path, item)
remap[dsq_id] = rv["id"]
self.comments.update(set(remap.keys()))
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
res = defaultdict(list)
for post in tree.findall(Disqus.ns + 'post'):
email = post.find('{0}author/{0}email'.format(Disqus.ns))
ip = post.find(Disqus.ns + 'ipAddress')
item = {
'dsq:id': post.attrib.get(Disqus.internals + 'id'),
'text': post.find(Disqus.ns + 'message').text,
'author': post.find('{0}author/{0}name'.format(Disqus.ns)).text,
'email': email.text if email is not None else '',
'created': mktime(strptime(
post.find(Disqus.ns + 'createdAt').text, '%Y-%m-%dT%H:%M:%SZ')),
'remote_addr': anonymize(ip.text if ip is not None else '0.0.0.0'),
'mode': 1 if post.find(Disqus.ns + "isDeleted").text == "false" else 4
}
if post.find(Disqus.ns + 'parent') is not None:
item['dsq:parent'] = post.find(
Disqus.ns + 'parent').attrib.get(Disqus.internals + 'id')
res[post.find('%sthread' % Disqus.ns).attrib.get(
Disqus.internals + 'id')].append(item)
progress = Progress(len(tree.findall(Disqus.ns + 'thread')))
for i, thread in enumerate(tree.findall(Disqus.ns + 'thread')):
# Workaround for not crashing with empty thread ids:
thread_id = thread.find(Disqus.ns + 'id')
if not thread_id:
thread_id = dict(text="<empty thread id>", empty=True)
progress.update(i, thread_id.get('text'))
# skip (possibly?) duplicate, but empty thread elements
if thread_id.get('empty') and not self.empty_id:
continue
id = thread.attrib.get(Disqus.internals + 'id')
if id in res:
self.threads.add(id)
self.insert(thread, res[id])
# in case a comment has been deleted (and no further children)
self.db.comments._remove_stale()
progress.finish("{0} threads, {1} comments".format(
len(self.threads), len(self.comments)))
orphans = set(map(lambda e: e.attrib.get(Disqus.internals + "id"),
tree.findall(Disqus.ns + "post"))) - self.comments
if orphans and not self.threads:
print("Isso couldn't import any thread, try again with --empty-id")
elif orphans:
print("Found %i orphans:" % len(orphans))
for post in tree.findall(Disqus.ns + "post"):
if post.attrib.get(Disqus.internals + "id") not in orphans:
continue
email = post.find("{0}author/{0}email".format(Disqus.ns))
print(" * {0} by {1} <{2}>".format(
post.attrib.get(Disqus.internals + "id"),
post.find("{0}author/{0}name".format(Disqus.ns)).text,
email.text if email is not None else ""))
print(textwrap.fill(post.find(Disqus.ns + "message").text,
initial_indent=" ", subsequent_indent=" "))
print("")
class WordPress(object):
ns = "{http://wordpress.org/export/1.0/}"
def __init__(self, db, xmlfile):
self.db = db
self.xmlfile = xmlfile
self.count = 0
for line in io.open(xmlfile, encoding="utf-8"):
m = WordPress.detect(line)
if m:
self.ns = WordPress.ns.replace("1.0", m.group(1))
break
else:
logger.warning("No WXR namespace found, assuming 1.0")
def insert(self, thread):
url = urlparse(thread.find("link").text)
path = url.path
if url.query:
path += "?" + url.query
self.db.threads.new(path, thread.find("title").text.strip())
comments = list(map(self.Comment, thread.findall(self.ns + "comment")))
comments.sort(key=lambda k: k["id"])
remap = {}
ids = set(c["id"] for c in comments)
self.count += len(ids)
while comments:
for i, item in enumerate(comments):
if item["parent"] in ids:
continue
item["parent"] = remap.get(item["parent"], None)
rv = self.db.comments.add(path, item)
remap[item["id"]] = rv["id"]
ids.remove(item["id"])
comments.pop(i)
break
else:
# should never happen, but... it's WordPress.
return
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
skip = 0
items = tree.findall("channel/item")
progress = Progress(len(items))
for i, thread in enumerate(items):
if thread.find("title").text is None or thread.find(self.ns + "comment") is None:
skip += 1
continue
progress.update(i, thread.find("title").text)
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(
len(items) - skip, self.count))
def Comment(self, el):
return {
"text": strip(el.find(self.ns + "comment_content").text),
"author": strip(el.find(self.ns + "comment_author").text),
"email": strip(el.find(self.ns + "comment_author_email").text),
"website": strip(el.find(self.ns + "comment_author_url").text),
"remote_addr": anonymize(
strip(el.find(self.ns + "comment_author_IP").text)),
"created": mktime(strptime(
strip(el.find(self.ns + "comment_date_gmt").text),
"%Y-%m-%d %H:%M:%S")),
"mode": 1 if el.find(self.ns + "comment_approved").text == "1" else 2,
"id": int(el.find(self.ns + "comment_id").text),
"parent": int(el.find(self.ns + "comment_parent").text) or None
}
@classmethod
def detect(cls, peek):
return re.compile("http://wordpress.org/export/(1\\.\\d)/").search(peek)
class Generic(object):
"""A generic importer.
    The source format is a JSON file with the following structure:
A list of threads, each item being a dict with the following data:
    - id: a string representing the unique thread id
- title: the title of the thread
- comments: the list of comments
Each item in that list of comments is a dict with the following data:
- id: an integer with the unique id of the comment inside the thread (it can be repeated
among different threads); this will be used to order the comment inside the thread
- author: the author's name
- email: the author's email
- website: the author's website
- remote_addr: the author's IP
- created: a timestamp, in the format "%Y-%m-%d %H:%M:%S"
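    Example of a valid input file (illustrative values only; note that each
    comment also carries a "text" field with the comment body):
        [{"id": "/blog/example/", "title": "Example thread",
          "comments": [{"id": 1, "text": "Hello!", "author": "Jane Doe",
                        "email": "jane@example.invalid", "website": "",
                        "remote_addr": "127.0.0.1",
                        "created": "2018-01-01 12:00:00"}]}]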
"""
def __init__(self, db, json_file):
self.db = db
self.json_file = json_file
self.count = 0
def insert(self, thread):
"""Process a thread and insert its comments in the DB."""
thread_id = thread['id']
title = thread['title']
self.db.threads.new(thread_id, title)
comments = list(map(self._build_comment, thread['comments']))
comments.sort(key=lambda comment: comment['id'])
self.count += len(comments)
for comment in comments:
self.db.comments.add(thread_id, comment)
def migrate(self):
"""Process the input file and fill the DB."""
with io.open(self.json_file, 'rt', encoding='utf8') as fh:
threads = json.load(fh)
progress = Progress(len(threads))
for i, thread in enumerate(threads):
progress.update(i, str(i))
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(len(threads), self.count))
def _build_comment(self, raw_comment):
return {
"text": raw_comment['text'],
"author": raw_comment['author'],
"email": raw_comment['email'],
"website": raw_comment['website'],
"created": mktime(strptime(raw_comment['created'], "%Y-%m-%d %H:%M:%S")),
"mode": 1,
"id": int(raw_comment['id']),
"parent": None,
"remote_addr": raw_comment["remote_addr"],
}
@classmethod
def detect(cls, peek):
"""Return if peek looks like the beginning of a JSON file.
Note that we can not check the JSON properly as we only receive here
the original file truncated.
"""
return peek.startswith("[{")
def autodetect(peek):
if 'xmlns="http://disqus.com' in peek:
return Disqus
m = WordPress.detect(peek)
if m:
return WordPress
if Generic.detect(peek):
return Generic
return None
def dispatch(type, db, dump, empty_id=False):
if db.execute("SELECT * FROM comments").fetchone():
if input("Isso DB is not empty! Continue? [y/N]: ") not in ("y", "Y"):
raise SystemExit("Abort.")
if type == "disqus":
cls = Disqus
elif type == "wordpress":
cls = WordPress
elif type == "generic":
cls = Generic
else:
with io.open(dump, encoding="utf-8") as fp:
cls = autodetect(fp.read(io.DEFAULT_BUFFER_SIZE))
if cls is None:
raise SystemExit("Unknown format, abort.")
if cls is Disqus:
cls = functools.partial(cls, empty_id=empty_id)
cls(db, dump).migrate()
| # -*- encoding: utf-8 -*-
import functools
import io
import json
import logging
import os
import re
import sys
import textwrap
from collections import defaultdict
from time import mktime, strptime, time
from urllib.parse import urlparse
from xml.etree import ElementTree
from isso.utils import anonymize
logger = logging.getLogger("isso")
def strip(val):
if isinstance(val, (str, )):
return val.strip()
return val
class Progress(object):
def __init__(self, end):
self.end = end or 1
self.istty = sys.stdout.isatty()
self.last = 0
def update(self, i, message):
if not self.istty or message is None:
return
cols = int((os.popen('stty size', 'r').read()).split()[1])
message = message[:cols - 7]
if time() - self.last > 0.2:
sys.stdout.write("\r{0}".format(" " * cols))
sys.stdout.write("\r[{0:.0%}] {1}".format(i / self.end, message))
sys.stdout.flush()
self.last = time()
def finish(self, message):
self.last = 0
self.update(self.end, message + "\n")
class Disqus(object):
ns = '{http://disqus.com}'
internals = '{http://disqus.com/disqus-internals}'
def __init__(self, db, xmlfile, empty_id=False):
self.threads = set([])
self.comments = set([])
self.db = db
self.xmlfile = xmlfile
self.empty_id = empty_id
def insert(self, thread, posts):
path = urlparse(thread.find('%slink' % Disqus.ns).text).path
remap = dict()
if path not in self.db.threads:
thread_title = thread.find(Disqus.ns + 'title').text or ''
self.db.threads.new(path, thread_title.strip())
for item in sorted(posts, key=lambda k: k['created']):
dsq_id = item.pop('dsq:id')
item['parent'] = remap.get(item.pop('dsq:parent', None))
rv = self.db.comments.add(path, item)
remap[dsq_id] = rv["id"]
self.comments.update(set(remap.keys()))
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
res = defaultdict(list)
for post in tree.findall(Disqus.ns + 'post'):
email = post.find('{0}author/{0}email'.format(Disqus.ns))
ip = post.find(Disqus.ns + 'ipAddress')
item = {
'dsq:id': post.attrib.get(Disqus.internals + 'id'),
'text': post.find(Disqus.ns + 'message').text,
'author': post.find('{0}author/{0}name'.format(Disqus.ns)).text,
'email': email.text if email is not None else '',
'created': mktime(strptime(
post.find(Disqus.ns + 'createdAt').text, '%Y-%m-%dT%H:%M:%SZ')),
'remote_addr': anonymize(ip.text if ip is not None else '0.0.0.0'),
'mode': 1 if post.find(Disqus.ns + "isDeleted").text == "false" else 4
}
if post.find(Disqus.ns + 'parent') is not None:
item['dsq:parent'] = post.find(
Disqus.ns + 'parent').attrib.get(Disqus.internals + 'id')
res[post.find('%sthread' % Disqus.ns).attrib.get(
Disqus.internals + 'id')].append(item)
progress = Progress(len(tree.findall(Disqus.ns + 'thread')))
for i, thread in enumerate(tree.findall(Disqus.ns + 'thread')):
# Workaround for not crashing with empty thread ids:
thread_id = thread.find(Disqus.ns + 'id')
if not thread_id:
thread_id = dict(text="<empty thread id>", empty=True)
progress.update(i, thread_id.get('text'))
# skip (possibly?) duplicate, but empty thread elements
if thread_id.get('empty') and not self.empty_id:
continue
id = thread.attrib.get(Disqus.internals + 'id')
if id in res:
self.threads.add(id)
self.insert(thread, res[id])
        # in case a comment has been deleted (and has no further children)
self.db.comments._remove_stale()
progress.finish("{0} threads, {1} comments".format(
len(self.threads), len(self.comments)))
orphans = set(map(lambda e: e.attrib.get(Disqus.internals + "id"),
tree.findall(Disqus.ns + "post"))) - self.comments
if orphans and not self.threads:
print("Isso couldn't import any thread, try again with --empty-id")
elif orphans:
print("Found %i orphans:" % len(orphans))
for post in tree.findall(Disqus.ns + "post"):
if post.attrib.get(Disqus.internals + "id") not in orphans:
continue
email = post.find("{0}author/{0}email".format(Disqus.ns))
print(" * {0} by {1} <{2}>".format(
post.attrib.get(Disqus.internals + "id"),
post.find("{0}author/{0}name".format(Disqus.ns)).text,
email.text if email is not None else ""))
print(textwrap.fill(post.find(Disqus.ns + "message").text,
initial_indent=" ", subsequent_indent=" "))
print("")
class WordPress(object):
ns = "{http://wordpress.org/export/1.0/}"
def __init__(self, db, xmlfile):
self.db = db
self.xmlfile = xmlfile
self.count = 0
for line in io.open(xmlfile, encoding="utf-8"):
m = WordPress.detect(line)
if m:
self.ns = WordPress.ns.replace("1.0", m.group(1))
break
else:
logger.warning("No WXR namespace found, assuming 1.0")
def insert(self, thread):
url = urlparse(thread.find("link").text)
path = url.path
if url.query:
path += "?" + url.query
self.db.threads.new(path, thread.find("title").text.strip())
comments = list(map(self.Comment, thread.findall(self.ns + "comment")))
comments.sort(key=lambda k: k["id"])
remap = {}
ids = set(c["id"] for c in comments)
self.count += len(ids)
while comments:
for i, item in enumerate(comments):
if item["parent"] in ids:
continue
item["parent"] = remap.get(item["parent"], None)
rv = self.db.comments.add(path, item)
remap[item["id"]] = rv["id"]
ids.remove(item["id"])
comments.pop(i)
break
else:
# should never happen, but... it's WordPress.
return
def migrate(self):
tree = ElementTree.parse(self.xmlfile)
skip = 0
items = tree.findall("channel/item")
progress = Progress(len(items))
for i, thread in enumerate(items):
if thread.find("title").text is None or thread.find(self.ns + "comment") is None:
skip += 1
continue
progress.update(i, thread.find("title").text)
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(
len(items) - skip, self.count))
def _process_comment_content(self, text):
# WordPress comment text renders a single newline between two blocks of
# text as a <br> tag, so add an explicit Markdown line break on import
# (Otherwise multiple blocks of text separated by single newlines are
# all shown as one long line.)
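        # For example (illustrative intent): "first line\nsecond line" should
        # keep a visible line break between the two lines once rendered as
        # Markdown.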
text = re.sub(r'(?!^\n)\n(?!^\n)', ' \n', text, 0)
return strip(text)
def Comment(self, el):
return {
"text": self._process_comment_content(el.find(self.ns + "comment_content").text),
"author": strip(el.find(self.ns + "comment_author").text),
"email": strip(el.find(self.ns + "comment_author_email").text),
"website": strip(el.find(self.ns + "comment_author_url").text),
"remote_addr": anonymize(
strip(el.find(self.ns + "comment_author_IP").text)),
"created": mktime(strptime(
strip(el.find(self.ns + "comment_date_gmt").text),
"%Y-%m-%d %H:%M:%S")),
"mode": 1 if el.find(self.ns + "comment_approved").text == "1" else 2,
"id": int(el.find(self.ns + "comment_id").text),
"parent": int(el.find(self.ns + "comment_parent").text) or None
}
@classmethod
def detect(cls, peek):
return re.compile("http://wordpress.org/export/(1\\.\\d)/").search(peek)
class Generic(object):
"""A generic importer.
    The source format is a JSON file with the following structure:
A list of threads, each item being a dict with the following data:
    - id: a string representing the unique thread id
- title: the title of the thread
- comments: the list of comments
Each item in that list of comments is a dict with the following data:
- id: an integer with the unique id of the comment inside the thread (it can be repeated
among different threads); this will be used to order the comment inside the thread
- author: the author's name
- email: the author's email
- website: the author's website
- remote_addr: the author's IP
- created: a timestamp, in the format "%Y-%m-%d %H:%M:%S"
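    Example of a valid input file (illustrative values only; note that each
    comment also carries a "text" field with the comment body):
        [{"id": "/blog/example/", "title": "Example thread",
          "comments": [{"id": 1, "text": "Hello!", "author": "Jane Doe",
                        "email": "jane@example.invalid", "website": "",
                        "remote_addr": "127.0.0.1",
                        "created": "2018-01-01 12:00:00"}]}]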
"""
def __init__(self, db, json_file):
self.db = db
self.json_file = json_file
self.count = 0
def insert(self, thread):
"""Process a thread and insert its comments in the DB."""
thread_id = thread['id']
title = thread['title']
self.db.threads.new(thread_id, title)
comments = list(map(self._build_comment, thread['comments']))
comments.sort(key=lambda comment: comment['id'])
self.count += len(comments)
for comment in comments:
self.db.comments.add(thread_id, comment)
def migrate(self):
"""Process the input file and fill the DB."""
with io.open(self.json_file, 'rt', encoding='utf8') as fh:
threads = json.load(fh)
progress = Progress(len(threads))
for i, thread in enumerate(threads):
progress.update(i, str(i))
self.insert(thread)
progress.finish("{0} threads, {1} comments".format(len(threads), self.count))
def _build_comment(self, raw_comment):
return {
"text": raw_comment['text'],
"author": raw_comment['author'],
"email": raw_comment['email'],
"website": raw_comment['website'],
"created": mktime(strptime(raw_comment['created'], "%Y-%m-%d %H:%M:%S")),
"mode": 1,
"id": int(raw_comment['id']),
"parent": None,
"remote_addr": raw_comment["remote_addr"],
}
@classmethod
def detect(cls, peek):
"""Return if peek looks like the beginning of a JSON file.
Note that we can not check the JSON properly as we only receive here
the original file truncated.
"""
return peek.startswith("[{")
def autodetect(peek):
if 'xmlns="http://disqus.com' in peek:
return Disqus
m = WordPress.detect(peek)
if m:
return WordPress
if Generic.detect(peek):
return Generic
return None
def dispatch(type, db, dump, empty_id=False):
if db.execute("SELECT * FROM comments").fetchone():
if input("Isso DB is not empty! Continue? [y/N]: ") not in ("y", "Y"):
raise SystemExit("Abort.")
if type == "disqus":
cls = Disqus
elif type == "wordpress":
cls = WordPress
elif type == "generic":
cls = Generic
else:
with io.open(dump, encoding="utf-8") as fp:
cls = autodetect(fp.read(io.DEFAULT_BUFFER_SIZE))
if cls is None:
raise SystemExit("Unknown format, abort.")
if cls is Disqus:
cls = functools.partial(cls, empty_id=empty_id)
cls(db, dump).migrate()
| projectgus | b1021d1e57dd595a581a614ecd26edea4ae69557 | 30ef2180f70ebd58b929673e27d73d889442eb99 | @ix5 oh snap, sorry I totally misread your first suggestion. And I clearly need to read the Markdown spec more often!
Updated, thanks for re-explaining. | projectgus | 5 |
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea`; fix indentation bugs (rough sketch below)
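
A rough sketch of the markup change, for illustration only and not the exact template diff (the `isso-textarea` class name is the one already used by the stylesheet in this PR):

```html
<!-- before: a plain div turned into an editor via contenteditable -->
<div class="isso-textarea" contenteditable="true"></div>

<!-- after: a real form control -->
<textarea class="isso-textarea"></textarea>
```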
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/css/isso.css | #isso-thread * {
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box;
}
#isso-thread .isso-comment-header a {
text-decoration: none;
}
#isso-thread {
padding: 0;
margin: 0;
}
#isso-thread > h4 {
color: #555;
font-weight: bold;
}
#isso-thread > .isso-feedlink {
float: right;
padding-left: 1em;
}
#isso-thread > .isso-feedlink > a {
font-size: 0.8em;
vertical-align: bottom;
}
#isso-thread .isso-textarea {
min-height: 58px;
outline: 0;
}
#isso-thread .isso-textarea.isso-placeholder {
color: #757575;
}
#isso-root .isso-comment {
max-width: 68em;
margin: 0 auto;
}
#isso-root .isso-preview .isso-comment {
padding-top: 0;
margin: 0;
}
#isso-root .isso-comment:not(:first-of-type),
.isso-follow-up .isso-comment {
border-top: 1px solid rgba(0, 0, 0, 0.1);
margin-bottom: 0.5em;
}
.isso-comment > .isso-avatar {
display: block;
float: left;
margin: 0.95em 0.95em 0;
}
.isso-comment > .isso-avatar > svg {
max-width: 48px;
max-height: 48px;
width: 100%;
height: 100%;
border: 1px solid rgba(0, 0, 0, 0.2);
border-radius: 3px;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-comment > .isso-text-wrapper {
display: block;
padding: 0.95em;
}
.isso-comment .isso-follow-up {
padding-left: calc(7% + 20px);
}
.isso-comment > .isso-text-wrapper > .isso-comment-header, .isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.95em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header {
font-size: 0.85em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer {
padding: 0 6px;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent {
color: gray !important;
font-weight: normal;
text-shadow: none !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent:hover {
color: #606060 !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note {
float: right;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-author {
font-weight: bold;
color: #555;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-page-author-suffix {
font-weight: bold;
color: #2c2c2c;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p:last-child {
margin-bottom: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text h1,
.isso-comment > .isso-text-wrapper > .isso-text h2,
.isso-comment > .isso-text-wrapper > .isso-text h3,
.isso-comment > .isso-text-wrapper > .isso-text h4,
.isso-comment > .isso-text-wrapper > .isso-text h5,
.isso-comment > .isso-text-wrapper > .isso-text h6 {
font-size: 130%;
font-weight: bold;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
width: 100%;
border: 1px solid #f0f0f0;
border-radius: 2px;
box-shadow: 0 0 2px #888;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.80em;
color: gray !important;
clear: left;
}
.isso-feedlink,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a {
font-weight: bold;
text-decoration: none;
}
.isso-feedlink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a:hover {
color: #111111 !important;
text-shadow: #aaaaaa 0 0 1px !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a {
position: relative;
top: .2em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a + a {
padding-left: 1em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-votes {
color: gray;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-upvote svg,
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-downvote svg {
position: relative;
top: .2em;
}
.isso-comment .isso-postbox {
margin-top: 0.8em;
}
.isso-comment.isso-no-votes > * > .isso-comment-footer span.isso-votes {
display: none;
}
/* "target" means the comment that's being linked to, for example:
* https://example.com/blog/example/#isso-15
*/
.isso-target {
animation: isso-target-fade 5s ease-out;
}
@keyframes isso-target-fade {
0% { background-color: #eee5a1; }
/* This color should be changed when used on a dark background,
* maybe #3f3c1c for example
*/
}
.isso-postbox {
max-width: 68em;
margin: 0 auto 2em;
clear: right;
}
.isso-postbox > .isso-form-wrapper {
display: block;
padding: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section,
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
}
.isso-postbox > .isso-form-wrapper .isso-textarea,
.isso-postbox > .isso-form-wrapper .isso-preview {
margin: 0 0 .3em;
padding: .4em .8em;
border-radius: 3px;
background-color: #fff;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper input[type=checkbox] {
vertical-align: middle;
position: relative;
bottom: 1px;
margin-left: 0;
}
.isso-postbox > .isso-form-wrapper .isso-notification-section {
font-size: 0.90em;
padding-top: .3em;
}
#isso-thread .isso-textarea:focus,
#isso-thread input:focus {
border-color: rgba(0, 0, 0, 0.8);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: inline-block;
position: relative;
max-width: 25%;
margin: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
padding: .3em 10px;
max-width: 100%;
border-radius: 3px;
background-color: #fff;
line-height: 1.4em;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper label {
display: inline-block;
line-height: 1.4em;
height: 1.4em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
float: right;
margin: 1.4em 0 0 5px;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input {
padding: calc(.3em - 1px);
border-radius: 2px;
border: 1px solid #CCC;
background-color: #DDD;
cursor: pointer;
outline: 0;
line-height: 1.4em;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:hover {
background-color: #CCC;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:active {
background-color: #BBB;
}
.isso-postbox > .isso-form-wrapper .isso-preview,
.isso-postbox > .isso-form-wrapper input[name="edit"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="preview"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-textarea {
display: none;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-preview {
display: block;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="edit"] {
display: inline;
}
.isso-postbox > .isso-form-wrapper .isso-preview {
background-color: #f8f8f8;
background: repeating-linear-gradient(
-45deg,
#f8f8f8,
#f8f8f8 10px,
#fff 10px,
#fff 20px
);
}
.isso-postbox > .isso-form-wrapper > .isso-notification-section {
display: none;
padding-bottom: 10px;
}
@media screen and (max-width:600px) {
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: block;
max-width: 100%;
margin: 0 0 .3em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
width: 100%;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
margin-top: 0;
}
}
| #isso-thread * {
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box;
}
#isso-thread .isso-comment-header a {
text-decoration: none;
}
#isso-thread {
padding: 0;
margin: 0;
}
#isso-thread > h4 {
color: #555;
font-weight: bold;
}
#isso-thread > .isso-feedlink {
float: right;
padding-left: 1em;
}
#isso-thread > .isso-feedlink > a {
font-size: 0.8em;
vertical-align: bottom;
}
#isso-thread .isso-textarea {
outline: 0;
width: 100%;
resize: none;
}
#isso-root .isso-comment {
max-width: 68em;
margin: 0 auto;
}
#isso-root .isso-preview .isso-comment {
padding-top: 0;
margin: 0;
}
#isso-root .isso-comment:not(:first-of-type),
.isso-follow-up .isso-comment {
border-top: 1px solid rgba(0, 0, 0, 0.1);
margin-bottom: 0.5em;
}
.isso-comment > .isso-avatar {
display: block;
float: left;
margin: 0.95em 0.95em 0;
}
.isso-comment > .isso-avatar > svg {
max-width: 48px;
max-height: 48px;
width: 100%;
height: 100%;
border: 1px solid rgba(0, 0, 0, 0.2);
border-radius: 3px;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-comment > .isso-text-wrapper {
display: block;
padding: 0.95em;
}
.isso-comment .isso-follow-up {
padding-left: calc(7% + 20px);
}
.isso-comment > .isso-text-wrapper > .isso-comment-header, .isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.95em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header {
font-size: 0.85em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer {
padding: 0 6px;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent {
color: gray !important;
font-weight: normal;
text-shadow: none !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent:hover {
color: #606060 !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note {
float: right;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-author {
font-weight: bold;
color: #555;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-page-author-suffix {
font-weight: bold;
color: #2c2c2c;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p:last-child {
margin-bottom: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text h1,
.isso-comment > .isso-text-wrapper > .isso-text h2,
.isso-comment > .isso-text-wrapper > .isso-text h3,
.isso-comment > .isso-text-wrapper > .isso-text h4,
.isso-comment > .isso-text-wrapper > .isso-text h5,
.isso-comment > .isso-text-wrapper > .isso-text h6 {
font-size: 130%;
font-weight: bold;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
width: 100%;
border: 1px solid #f0f0f0;
border-radius: 2px;
box-shadow: 0 0 2px #888;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.80em;
color: gray !important;
clear: left;
}
.isso-feedlink,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a {
font-weight: bold;
text-decoration: none;
}
.isso-feedlink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a:hover {
color: #111111 !important;
text-shadow: #aaaaaa 0 0 1px !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a {
position: relative;
top: .2em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a + a {
padding-left: 1em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-votes {
color: gray;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-upvote svg,
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-downvote svg {
position: relative;
top: .2em;
}
.isso-comment .isso-postbox {
margin-top: 0.8em;
}
.isso-comment.isso-no-votes > * > .isso-comment-footer span.isso-votes {
display: none;
}
/* "target" means the comment that's being linked to, for example:
* https://example.com/blog/example/#isso-15
*/
.isso-target {
animation: isso-target-fade 5s ease-out;
}
@keyframes isso-target-fade {
0% { background-color: #eee5a1; }
/* This color should be changed when used on a dark background,
* maybe #3f3c1c for example
*/
}
.isso-postbox {
max-width: 68em;
margin: 0 auto 2em;
clear: right;
}
.isso-postbox > .isso-form-wrapper {
display: block;
padding: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section,
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
}
.isso-postbox > .isso-form-wrapper .isso-textarea,
.isso-postbox > .isso-form-wrapper .isso-preview {
margin: 0 0 .3em;
padding: .4em .8em;
border-radius: 3px;
background-color: #fff;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper input[type=checkbox] {
vertical-align: middle;
position: relative;
bottom: 1px;
margin-left: 0;
}
.isso-postbox > .isso-form-wrapper .isso-notification-section {
font-size: 0.90em;
padding-top: .3em;
}
#isso-thread .isso-textarea:focus,
#isso-thread input:focus {
border-color: rgba(0, 0, 0, 0.8);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: inline-block;
position: relative;
max-width: 25%;
margin: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
padding: .3em 10px;
max-width: 100%;
border-radius: 3px;
background-color: #fff;
line-height: 1.4em;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper label {
display: inline-block;
line-height: 1.4em;
height: 1.4em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
float: right;
margin: 1.4em 0 0 5px;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input {
padding: calc(.3em - 1px);
border-radius: 2px;
border: 1px solid #CCC;
background-color: #DDD;
cursor: pointer;
outline: 0;
line-height: 1.4em;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:hover {
background-color: #CCC;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:active {
background-color: #BBB;
}
.isso-postbox > .isso-form-wrapper .isso-preview,
.isso-postbox > .isso-form-wrapper input[name="edit"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="preview"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-textarea {
display: none;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-preview {
display: block;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="edit"] {
display: inline;
}
.isso-postbox > .isso-form-wrapper .isso-preview {
background-color: #f8f8f8;
background: repeating-linear-gradient(
-45deg,
#f8f8f8,
#f8f8f8 10px,
#fff 10px,
#fff 20px
);
}
.isso-postbox > .isso-form-wrapper > .isso-notification-section {
display: none;
padding-bottom: 10px;
}
@media screen and (max-width:600px) {
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: block;
max-width: 100%;
margin: 0 0 .3em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
width: 100%;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
margin-top: 0;
}
}
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | The `outline: 0` should stay, no? Not that I'm opposed to changing that, but any styling changes should go into another PR. | ix5 | 6 |
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea`; fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/css/isso.css | #isso-thread * {
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box;
}
#isso-thread .isso-comment-header a {
text-decoration: none;
}
#isso-thread {
padding: 0;
margin: 0;
}
#isso-thread > h4 {
color: #555;
font-weight: bold;
}
#isso-thread > .isso-feedlink {
float: right;
padding-left: 1em;
}
#isso-thread > .isso-feedlink > a {
font-size: 0.8em;
vertical-align: bottom;
}
#isso-thread .isso-textarea {
min-height: 58px;
outline: 0;
}
#isso-thread .isso-textarea.isso-placeholder {
color: #757575;
}
#isso-root .isso-comment {
max-width: 68em;
margin: 0 auto;
}
#isso-root .isso-preview .isso-comment {
padding-top: 0;
margin: 0;
}
#isso-root .isso-comment:not(:first-of-type),
.isso-follow-up .isso-comment {
border-top: 1px solid rgba(0, 0, 0, 0.1);
margin-bottom: 0.5em;
}
.isso-comment > .isso-avatar {
display: block;
float: left;
margin: 0.95em 0.95em 0;
}
.isso-comment > .isso-avatar > svg {
max-width: 48px;
max-height: 48px;
width: 100%;
height: 100%;
border: 1px solid rgba(0, 0, 0, 0.2);
border-radius: 3px;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-comment > .isso-text-wrapper {
display: block;
padding: 0.95em;
}
.isso-comment .isso-follow-up {
padding-left: calc(7% + 20px);
}
.isso-comment > .isso-text-wrapper > .isso-comment-header, .isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.95em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header {
font-size: 0.85em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer {
padding: 0 6px;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent {
color: gray !important;
font-weight: normal;
text-shadow: none !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent:hover {
color: #606060 !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note {
float: right;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-author {
font-weight: bold;
color: #555;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-page-author-suffix {
font-weight: bold;
color: #2c2c2c;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p:last-child {
margin-bottom: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text h1,
.isso-comment > .isso-text-wrapper > .isso-text h2,
.isso-comment > .isso-text-wrapper > .isso-text h3,
.isso-comment > .isso-text-wrapper > .isso-text h4,
.isso-comment > .isso-text-wrapper > .isso-text h5,
.isso-comment > .isso-text-wrapper > .isso-text h6 {
font-size: 130%;
font-weight: bold;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
width: 100%;
border: 1px solid #f0f0f0;
border-radius: 2px;
box-shadow: 0 0 2px #888;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.80em;
color: gray !important;
clear: left;
}
.isso-feedlink,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a {
font-weight: bold;
text-decoration: none;
}
.isso-feedlink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a:hover {
color: #111111 !important;
text-shadow: #aaaaaa 0 0 1px !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a {
position: relative;
top: .2em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a + a {
padding-left: 1em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-votes {
color: gray;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-upvote svg,
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-downvote svg {
position: relative;
top: .2em;
}
.isso-comment .isso-postbox {
margin-top: 0.8em;
}
.isso-comment.isso-no-votes > * > .isso-comment-footer span.isso-votes {
display: none;
}
/* "target" means the comment that's being linked to, for example:
* https://example.com/blog/example/#isso-15
*/
.isso-target {
animation: isso-target-fade 5s ease-out;
}
@keyframes isso-target-fade {
0% { background-color: #eee5a1; }
/* This color should be changed when used on a dark background,
* maybe #3f3c1c for example
*/
}
.isso-postbox {
max-width: 68em;
margin: 0 auto 2em;
clear: right;
}
.isso-postbox > .isso-form-wrapper {
display: block;
padding: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section,
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
}
.isso-postbox > .isso-form-wrapper .isso-textarea,
.isso-postbox > .isso-form-wrapper .isso-preview {
margin: 0 0 .3em;
padding: .4em .8em;
border-radius: 3px;
background-color: #fff;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper input[type=checkbox] {
vertical-align: middle;
position: relative;
bottom: 1px;
margin-left: 0;
}
.isso-postbox > .isso-form-wrapper .isso-notification-section {
font-size: 0.90em;
padding-top: .3em;
}
#isso-thread .isso-textarea:focus,
#isso-thread input:focus {
border-color: rgba(0, 0, 0, 0.8);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: inline-block;
position: relative;
max-width: 25%;
margin: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
padding: .3em 10px;
max-width: 100%;
border-radius: 3px;
background-color: #fff;
line-height: 1.4em;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper label {
display: inline-block;
line-height: 1.4em;
height: 1.4em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
float: right;
margin: 1.4em 0 0 5px;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input {
padding: calc(.3em - 1px);
border-radius: 2px;
border: 1px solid #CCC;
background-color: #DDD;
cursor: pointer;
outline: 0;
line-height: 1.4em;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:hover {
background-color: #CCC;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:active {
background-color: #BBB;
}
.isso-postbox > .isso-form-wrapper .isso-preview,
.isso-postbox > .isso-form-wrapper input[name="edit"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="preview"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-textarea {
display: none;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-preview {
display: block;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="edit"] {
display: inline;
}
.isso-postbox > .isso-form-wrapper .isso-preview {
background-color: #f8f8f8;
background: repeating-linear-gradient(
-45deg,
#f8f8f8,
#f8f8f8 10px,
#fff 10px,
#fff 20px
);
}
.isso-postbox > .isso-form-wrapper > .isso-notification-section {
display: none;
padding-bottom: 10px;
}
@media screen and (max-width:600px) {
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: block;
max-width: 100%;
margin: 0 0 .3em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
width: 100%;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
margin-top: 0;
}
}
| #isso-thread * {
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box;
}
#isso-thread .isso-comment-header a {
text-decoration: none;
}
#isso-thread {
padding: 0;
margin: 0;
}
#isso-thread > h4 {
color: #555;
font-weight: bold;
}
#isso-thread > .isso-feedlink {
float: right;
padding-left: 1em;
}
#isso-thread > .isso-feedlink > a {
font-size: 0.8em;
vertical-align: bottom;
}
#isso-thread .isso-textarea {
outline: 0;
width: 100%;
resize: none;
}
#isso-root .isso-comment {
max-width: 68em;
margin: 0 auto;
}
#isso-root .isso-preview .isso-comment {
padding-top: 0;
margin: 0;
}
#isso-root .isso-comment:not(:first-of-type),
.isso-follow-up .isso-comment {
border-top: 1px solid rgba(0, 0, 0, 0.1);
margin-bottom: 0.5em;
}
.isso-comment > .isso-avatar {
display: block;
float: left;
margin: 0.95em 0.95em 0;
}
.isso-comment > .isso-avatar > svg {
max-width: 48px;
max-height: 48px;
width: 100%;
height: 100%;
border: 1px solid rgba(0, 0, 0, 0.2);
border-radius: 3px;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-comment > .isso-text-wrapper {
display: block;
padding: 0.95em;
}
.isso-comment .isso-follow-up {
padding-left: calc(7% + 20px);
}
.isso-comment > .isso-text-wrapper > .isso-comment-header, .isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.95em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header {
font-size: 0.85em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer {
padding: 0 6px;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent {
color: gray !important;
font-weight: normal;
text-shadow: none !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-spacer:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-permalink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-header a.isso-parent:hover {
color: #606060 !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-note {
float: right;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-author {
font-weight: bold;
color: #555;
}
.isso-comment > .isso-text-wrapper > .isso-comment-header .isso-page-author-suffix {
font-weight: bold;
color: #2c2c2c;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p {
margin-top: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text p:last-child {
margin-bottom: 0.2em;
}
.isso-comment > .isso-text-wrapper > .isso-text h1,
.isso-comment > .isso-text-wrapper > .isso-text h2,
.isso-comment > .isso-text-wrapper > .isso-text h3,
.isso-comment > .isso-text-wrapper > .isso-text h4,
.isso-comment > .isso-text-wrapper > .isso-text h5,
.isso-comment > .isso-text-wrapper > .isso-text h6 {
font-size: 130%;
font-weight: bold;
}
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-textarea,
.isso-comment > .isso-text-wrapper > .isso-textarea-wrapper .isso-preview {
width: 100%;
border: 1px solid #f0f0f0;
border-radius: 2px;
box-shadow: 0 0 2px #888;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer {
font-size: 0.80em;
color: gray !important;
clear: left;
}
.isso-feedlink,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a {
font-weight: bold;
text-decoration: none;
}
.isso-feedlink:hover,
.isso-comment > .isso-text-wrapper > .isso-comment-footer a:hover {
color: #111111 !important;
text-shadow: #aaaaaa 0 0 1px !important;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a {
position: relative;
top: .2em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer > a + a {
padding-left: 1em;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-votes {
color: gray;
}
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-upvote svg,
.isso-comment > .isso-text-wrapper > .isso-comment-footer .isso-downvote svg {
position: relative;
top: .2em;
}
.isso-comment .isso-postbox {
margin-top: 0.8em;
}
.isso-comment.isso-no-votes > * > .isso-comment-footer span.isso-votes {
display: none;
}
/* "target" means the comment that's being linked to, for example:
* https://example.com/blog/example/#isso-15
*/
.isso-target {
animation: isso-target-fade 5s ease-out;
}
@keyframes isso-target-fade {
0% { background-color: #eee5a1; }
/* This color should be changed when used on a dark background,
* maybe #3f3c1c for example
*/
}
.isso-postbox {
max-width: 68em;
margin: 0 auto 2em;
clear: right;
}
.isso-postbox > .isso-form-wrapper {
display: block;
padding: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section,
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
}
.isso-postbox > .isso-form-wrapper .isso-textarea,
.isso-postbox > .isso-form-wrapper .isso-preview {
margin: 0 0 .3em;
padding: .4em .8em;
border-radius: 3px;
background-color: #fff;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper input[type=checkbox] {
vertical-align: middle;
position: relative;
bottom: 1px;
margin-left: 0;
}
.isso-postbox > .isso-form-wrapper .isso-notification-section {
font-size: 0.90em;
padding-top: .3em;
}
#isso-thread .isso-textarea:focus,
#isso-thread input:focus {
border-color: rgba(0, 0, 0, 0.8);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: inline-block;
position: relative;
max-width: 25%;
margin: 0;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
padding: .3em 10px;
max-width: 100%;
border-radius: 3px;
background-color: #fff;
line-height: 1.4em;
border: 1px solid rgba(0, 0, 0, 0.2);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper label {
display: inline-block;
line-height: 1.4em;
height: 1.4em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
display: block;
float: right;
margin: 1.4em 0 0 5px;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input {
padding: calc(.3em - 1px);
border-radius: 2px;
border: 1px solid #CCC;
background-color: #DDD;
cursor: pointer;
outline: 0;
line-height: 1.4em;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1);
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:hover {
background-color: #CCC;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action > input:active {
background-color: #BBB;
}
.isso-postbox > .isso-form-wrapper .isso-preview,
.isso-postbox > .isso-form-wrapper input[name="edit"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="preview"],
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-textarea {
display: none;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper .isso-preview {
display: block;
}
.isso-postbox.isso-preview-mode > .isso-form-wrapper input[name="edit"] {
display: inline;
}
.isso-postbox > .isso-form-wrapper .isso-preview {
background-color: #f8f8f8;
background: repeating-linear-gradient(
-45deg,
#f8f8f8,
#f8f8f8 10px,
#fff 10px,
#fff 20px
);
}
.isso-postbox > .isso-form-wrapper > .isso-notification-section {
display: none;
padding-bottom: 10px;
}
@media screen and (max-width:600px) {
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper {
display: block;
max-width: 100%;
margin: 0 0 .3em;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-input-wrapper input {
width: 100%;
}
.isso-postbox > .isso-form-wrapper > .isso-auth-section .isso-post-action {
margin-top: 0;
}
}
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | I'll add it back, but I can't really tell the difference. But you're right, it can be looked at later in a CSS cleanup. | BBaoVanC | 7 |
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea`; fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/app/isso.js | /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var editorify = function(el) {
el = $.htmlify(el);
el.setAttribute("contentEditable", true);
el.on("focus", function() {
if (el.classList.contains("isso-placeholder")) {
el.innerHTML = "";
el.classList.remove("isso-placeholder");
}
});
el.on("blur", function() {
if (el.textContent.length === 0) {
el.textContent = i18n.translate("postbox-text");
el.classList.add("isso-placeholder");
}
});
return el;
};
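// Build the posting form. `parent` is the id of the comment being replied
// to (null for a new top-level comment) and is sent along on submit.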
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if (utils.text($(".isso-textarea", this).innerHTML).length < 3 ||
$(".isso-textarea", this).classList.contains("isso-placeholder"))
{
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview(utils.text($(".isso-textarea", el).innerHTML)).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", edit);
$(".isso-preview", el).on("click", edit);
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: utils.text($(".isso-textarea", el).innerHTML),
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).innerHTML = "";
$(".isso-textarea", el).blur();
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
editorify($(".isso-textarea", el));
return el;
};
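// Insert a "hidden replies" loader element for `comment`; clicking it
// fetches the remaining replies and renders them (plus another loader if
// some replies are still hidden).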
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
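// Render a single comment into the thread and wire up its reply, vote,
// edit and delete handlers.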
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = editorify($.new("div.isso-textarea"));
textarea.innerHTML = utils.detext(rv.text);
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (utils.text(textarea.innerHTML).length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": utils.text(textarea.innerHTML)}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
editorify: editorify,
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if ($(".isso-textarea", this).value.length < 3) {
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview($(".isso-textarea", el).value).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
$(".isso-preview", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: $(".isso-textarea", el).value,
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).value = "";
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = $.new("textarea.isso-textarea");
textarea.setAttribute("rows", 5);
textarea.setAttribute("minlength", 3);
textarea.setAttribute("maxlength", 65535);
textarea.value = rv.text;
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (textarea.value.length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": textarea.value}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | Was the focus loss (`Node.blur()`) taken out intentionally? I'm guessing it's because we don't need to re-add the placeholder text on the `blur` event, so there's no sense in triggering it? | ix5 | 8
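The question above is easier to see with a minimal sketch (illustration only, not Isso's actual template code; the element names and hint wording here are assumptions): the old `contenteditable` div had to swap its hint text in and out by hand on `focus`/`blur`, whereas a native `<textarea>` gets that for free from its `placeholder` attribute, so there is nothing left for a `blur` handler to do.

```js
// Illustration only. The hint wording is assumed, not taken from Isso's templates.

// Old approach: a contenteditable div needs manual placeholder handling.
var div = document.createElement("div");
div.setAttribute("contentEditable", true);
div.addEventListener("focus", function () {
    if (div.classList.contains("isso-placeholder")) {
        div.textContent = "";
        div.classList.remove("isso-placeholder");
    }
});
div.addEventListener("blur", function () {
    if (div.textContent.length === 0) {
        div.textContent = "Type a comment here...";   // re-add hint by hand
        div.classList.add("isso-placeholder");
    }
});

// New approach: the browser shows the placeholder whenever the field is empty,
// including after the field loses focus, so no focus/blur handlers are needed.
var textarea = document.createElement("textarea");
textarea.setAttribute("placeholder", "Type a comment here...");
document.body.append(div, textarea);
```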
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea`
- Fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/app/isso.js | /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var editorify = function(el) {
el = $.htmlify(el);
el.setAttribute("contentEditable", true);
el.on("focus", function() {
if (el.classList.contains("isso-placeholder")) {
el.innerHTML = "";
el.classList.remove("isso-placeholder");
}
});
el.on("blur", function() {
if (el.textContent.length === 0) {
el.textContent = i18n.translate("postbox-text");
el.classList.add("isso-placeholder");
}
});
return el;
}
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if (utils.text($(".isso-textarea", this).innerHTML).length < 3 ||
$(".isso-textarea", this).classList.contains("isso-placeholder"))
{
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview(utils.text($(".isso-textarea", el).innerHTML)).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", edit);
$(".isso-preview", el).on("click", edit);
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: utils.text($(".isso-textarea", el).innerHTML),
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).innerHTML = "";
$(".isso-textarea", el).blur();
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
editorify($(".isso-textarea", el));
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = editorify($.new("div.isso-textarea"));
textarea.innerHTML = utils.detext(rv.text);
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (utils.text(textarea.innerHTML).length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": utils.text(textarea.innerHTML)}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
editorify: editorify,
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if ($(".isso-textarea", this).value.length < 3) {
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview($(".isso-textarea", el).value).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
$(".isso-preview", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: $(".isso-textarea", el).value,
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).value = "";
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = $.new("textarea.isso-textarea");
textarea.setAttribute("rows", 5);
textarea.setAttribute("minlength", 3);
textarea.setAttribute("maxlength", 65535);
textarea.value = rv.text;
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (textarea.value.length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": textarea.value}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | This is not "on initial resize" but rather when returning from preview to editing again, no? | ix5 | 9
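To make the behaviour under discussion concrete, here is a rough plain-DOM sketch of the preview/edit toggle (Isso's `$` wrapper and templates are not reproduced, and the selectors are assumptions about the generated markup): the `.focus()` call sits inside the click handlers that leave preview mode, so the textarea regains focus when returning from preview to editing, not when the postbox is first rendered.

```js
// Rough sketch using plain DOM APIs; selectors assume markup similar to what
// the postbox template produces and may differ from the real thing.
var postbox = document.querySelector(".isso-postbox");
var textarea = postbox.querySelector(".isso-textarea");
var previewPane = postbox.querySelector(".isso-preview .isso-text");

function leavePreview() {
    previewPane.innerHTML = "";
    postbox.classList.remove("isso-preview-mode");
}

// Leaving preview mode - either via the "edit" button or by clicking the
// rendered preview - returns focus to the editor.
postbox.querySelector("[name='edit']").addEventListener("click", function () {
    leavePreview();
    textarea.focus();
});
postbox.querySelector(".isso-preview").addEventListener("click", function () {
    leavePreview();
    textarea.focus();
});
```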
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea`
- Fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/app/isso.js | /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var editorify = function(el) {
el = $.htmlify(el);
el.setAttribute("contentEditable", true);
el.on("focus", function() {
if (el.classList.contains("isso-placeholder")) {
el.innerHTML = "";
el.classList.remove("isso-placeholder");
}
});
el.on("blur", function() {
if (el.textContent.length === 0) {
el.textContent = i18n.translate("postbox-text");
el.classList.add("isso-placeholder");
}
});
return el;
}
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if (utils.text($(".isso-textarea", this).innerHTML).length < 3 ||
$(".isso-textarea", this).classList.contains("isso-placeholder"))
{
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview(utils.text($(".isso-textarea", el).innerHTML)).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", edit);
$(".isso-preview", el).on("click", edit);
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: utils.text($(".isso-textarea", el).innerHTML),
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).innerHTML = "";
$(".isso-textarea", el).blur();
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
editorify($(".isso-textarea", el));
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = editorify($.new("div.isso-textarea"));
textarea.innerHTML = utils.detext(rv.text);
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (utils.text(textarea.innerHTML).length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": utils.text(textarea.innerHTML)}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
editorify: editorify,
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if ($(".isso-textarea", this).value.length < 3) {
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview($(".isso-textarea", el).value).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
$(".isso-preview", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: $(".isso-textarea", el).value,
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).value = "";
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = $.new("textarea.isso-textarea");
textarea.setAttribute("rows", 5);
textarea.setAttribute("minlength", 3);
textarea.setAttribute("maxlength", 65535);
textarea.value = rv.text;
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (textarea.value.length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": textarea.value}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | Yeah, I'm using the `placeholder` attribute on the textarea. I also think it would already trigger the event anyway, since clicking the submit button makes the textarea lose focus. Regardless, I can't tell any difference when adding or removing that line. | BBaoVanC | 10
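The review comment above turns on why the `contenteditable` placeholder emulation can be dropped once a real `<textarea>` is used. The sketch below is illustrative only — it is not code from this pull request, and the placeholder string merely stands in for the project's translated `postbox-text` message: a native textarea gets its empty-state hint from the `placeholder` attribute, so no focus/blur handlers are needed and validation can read `.value` directly.

```js
// Illustrative sketch, not project code: contenteditable emulation vs. native textarea.

// contenteditable div: the placeholder has to be faked with focus/blur handlers.
var div = document.createElement("div");
div.setAttribute("contentEditable", true);
div.textContent = "Type your comment here...";          // fake placeholder text (illustrative)
div.classList.add("isso-placeholder");
div.addEventListener("focus", function() {
    if (div.classList.contains("isso-placeholder")) {
        div.textContent = "";
        div.classList.remove("isso-placeholder");
    }
});
div.addEventListener("blur", function() {
    if (div.textContent.length === 0) {
        div.textContent = "Type your comment here...";   // restore the fake placeholder
        div.classList.add("isso-placeholder");
    }
});

// native textarea: the browser renders the empty-state hint itself.
var textarea = document.createElement("textarea");
textarea.setAttribute("placeholder", "Type your comment here...");
// Validation no longer needs to special-case a placeholder class:
var longEnough = textarea.value.length >= 3;
```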
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea`
- Fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/app/isso.js | /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var editorify = function(el) {
el = $.htmlify(el);
el.setAttribute("contentEditable", true);
el.on("focus", function() {
if (el.classList.contains("isso-placeholder")) {
el.innerHTML = "";
el.classList.remove("isso-placeholder");
}
});
el.on("blur", function() {
if (el.textContent.length === 0) {
el.textContent = i18n.translate("postbox-text");
el.classList.add("isso-placeholder");
}
});
return el;
}
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if (utils.text($(".isso-textarea", this).innerHTML).length < 3 ||
$(".isso-textarea", this).classList.contains("isso-placeholder"))
{
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview(utils.text($(".isso-textarea", el).innerHTML)).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", edit);
$(".isso-preview", el).on("click", edit);
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: utils.text($(".isso-textarea", el).innerHTML),
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).innerHTML = "";
$(".isso-textarea", el).blur();
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
editorify($(".isso-textarea", el));
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = editorify($.new("div.isso-textarea"));
textarea.innerHTML = utils.detext(rv.text);
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (utils.text(textarea.innerHTML).length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": utils.text(textarea.innerHTML)}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
editorify: editorify,
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if ($(".isso-textarea", this).value.length < 3) {
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview($(".isso-textarea", el).value).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
$(".isso-preview", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: $(".isso-textarea", el).value,
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).value = "";
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = $.new("textarea.isso-textarea");
textarea.setAttribute("rows", 5);
textarea.setAttribute("minlength", 3);
textarea.setAttribute("maxlength", 65535);
textarea.value = rv.text;
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (textarea.value.length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": textarea.value}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | This function triggers once when you press the Edit link on an existing comment. I originally added this line when I was using `autosize` to resize the textbox; with autosize, the first resize of the textarea would scroll it off screen if the comment was longer than a few lines.
It actually looks like this serves a different function now. When editing without autosize, the edit box is still halfway off-screen when pressing the "Edit button". ~~I'll update the comment, let me know what you think about this addition.~~
**Edit:** I'm just going to remove this since it's unrelated to what this pull request aims to fix. | BBaoVanC | 11 |
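The thread above concerns a `.focus()` call that was originally meant to keep the edit box on screen while `autosize` resized it, and that was ultimately dropped from this pull request as out of scope. Purely as a hypothetical sketch — the `startEdit` helper below is invented for illustration and does not appear in the code above — the behaviour being discussed amounts to scrolling the freshly inserted textarea into view before focusing it:

```js
// Hypothetical sketch only — not part of this pull request.
function startEdit(textWrapper, originalText) {
    var textarea = document.createElement("textarea");
    textarea.className = "isso-textarea";
    textarea.rows = 5;
    textarea.value = originalText;
    // Swap the rendered comment text for the edit box.
    textWrapper.textContent = "";
    textWrapper.appendChild(textarea);
    // Bring the box fully on screen before giving it focus,
    // so focusing does not leave it halfway off-screen.
    textarea.scrollIntoView({ block: "nearest" });
    textarea.focus();
    return textarea;
}
```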
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea` - fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/app/isso.js | /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var editorify = function(el) {
el = $.htmlify(el);
el.setAttribute("contentEditable", true);
el.on("focus", function() {
if (el.classList.contains("isso-placeholder")) {
el.innerHTML = "";
el.classList.remove("isso-placeholder");
}
});
el.on("blur", function() {
if (el.textContent.length === 0) {
el.textContent = i18n.translate("postbox-text");
el.classList.add("isso-placeholder");
}
});
return el;
}
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if (utils.text($(".isso-textarea", this).innerHTML).length < 3 ||
$(".isso-textarea", this).classList.contains("isso-placeholder"))
{
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview(utils.text($(".isso-textarea", el).innerHTML)).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", edit);
$(".isso-preview", el).on("click", edit);
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: utils.text($(".isso-textarea", el).innerHTML),
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).innerHTML = "";
$(".isso-textarea", el).blur();
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
editorify($(".isso-textarea", el));
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = editorify($.new("div.isso-textarea"));
textarea.innerHTML = utils.detext(rv.text);
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (utils.text(textarea.innerHTML).length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": utils.text(textarea.innerHTML)}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
editorify: editorify,
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if ($(".isso-textarea", this).value.length < 3) {
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview($(".isso-textarea", el).value).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
$(".isso-preview", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: $(".isso-textarea", el).value,
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).value = "";
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = $.new("textarea.isso-textarea");
textarea.setAttribute("rows", 5);
textarea.setAttribute("minlength", 3);
textarea.setAttribute("maxlength", 65535);
textarea.value = rv.text;
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (textarea.value.length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": textarea.value}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | rows still 10 here, please set to 5 | ix5 | 12 |
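The one-line review comment above asks for the textarea to render with 5 rows instead of 10. The exact template line it refers to is not included in this record, so the snippet below only illustrates, under that assumption, how the visible height is controlled — either with a `rows="5"` attribute in the markup or from script, matching the edit textarea created in the code above:

```js
// Illustrative only; mirrors the attributes used for the edit textarea above.
// In markup this would be:
// <textarea class="isso-textarea" rows="5" minlength="3" maxlength="65535"></textarea>
var textarea = document.createElement("textarea");
textarea.setAttribute("rows", 5);          // five visible lines, as the reviewer requests
textarea.setAttribute("minlength", 3);
textarea.setAttribute("maxlength", 65535);
```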
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea` - fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/app/isso.js | /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var editorify = function(el) {
el = $.htmlify(el);
el.setAttribute("contentEditable", true);
el.on("focus", function() {
if (el.classList.contains("isso-placeholder")) {
el.innerHTML = "";
el.classList.remove("isso-placeholder");
}
});
el.on("blur", function() {
if (el.textContent.length === 0) {
el.textContent = i18n.translate("postbox-text");
el.classList.add("isso-placeholder");
}
});
return el;
}
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if (utils.text($(".isso-textarea", this).innerHTML).length < 3 ||
$(".isso-textarea", this).classList.contains("isso-placeholder"))
{
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview(utils.text($(".isso-textarea", el).innerHTML)).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", edit);
$(".isso-preview", el).on("click", edit);
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: utils.text($(".isso-textarea", el).innerHTML),
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).innerHTML = "";
$(".isso-textarea", el).blur();
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
editorify($(".isso-textarea", el));
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = editorify($.new("div.isso-textarea"));
textarea.innerHTML = utils.detext(rv.text);
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (utils.text(textarea.innerHTML).length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": utils.text(textarea.innerHTML)}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
editorify: editorify,
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| /* Isso – Ich schrei sonst!
*/
var $ = require("app/dom");
var utils = require("app/utils");
var config = require("app/config");
var api = require("app/api");
var template = require("app/template");
var i18n = require("app/i18n");
var identicons = require("app/lib/identicons");
var globals = require("app/globals");
"use strict";
var Postbox = function(parent) {
var localStorage = utils.localStorageImpl,
el = $.htmlify(template.render("postbox", {
"author": JSON.parse(localStorage.getItem("isso-author")),
"email": JSON.parse(localStorage.getItem("isso-email")),
"website": JSON.parse(localStorage.getItem("isso-website")),
"preview": ''
}));
// callback on success (e.g. to toggle the reply button)
el.onsuccess = function() {};
el.validate = function() {
if ($(".isso-textarea", this).value.length < 3) {
$(".isso-textarea", this).focus();
return false;
}
if (config["require-email"] &&
$("[name='email']", this).value.length <= 0)
{
$("[name='email']", this).focus();
return false;
}
if (config["require-author"] &&
$("[name='author']", this).value.length <= 0)
{
$("[name='author']", this).focus();
return false;
}
return true;
};
// only display notification checkbox if email is filled in
var email_edit = function() {
if (config["reply-notifications"] && $("[name='email']", el).value.length > 0) {
$(".isso-notification-section", el).show();
} else {
$(".isso-notification-section", el).hide();
}
};
$("[name='email']", el).on("input", email_edit);
email_edit();
// email is not optional if this config parameter is set
if (config["require-email"]) {
$("[for='isso-postbox-email']", el).textContent =
$("[for='isso-postbox-email']", el).textContent.replace(/ \(.*\)/, "");
}
// author is not optional if this config parameter is set
if (config["require-author"]) {
$("[for='isso-postbox-author']", el).textContent =
$("[for='isso-postbox-author']", el).textContent.replace(/ \(.*\)/, "");
}
// preview function
$("[name='preview']", el).on("click", function() {
api.preview($(".isso-textarea", el).value).then(
function(html) {
$(".isso-preview .isso-text", el).innerHTML = html;
el.classList.add('isso-preview-mode');
});
});
// edit function
var edit = function() {
$(".isso-preview .isso-text", el).innerHTML = '';
el.classList.remove('isso-preview-mode');
};
$("[name='edit']", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
$(".isso-preview", el).on("click", function() {
edit();
$(".isso-textarea", el).focus();
});
// submit form, initialize optional fields with `null` and reset form.
// If replied to a comment, remove form completely.
$("[type=submit]", el).on("click", function() {
edit();
if (! el.validate()) {
return;
}
var author = $("[name=author]", el).value || null,
email = $("[name=email]", el).value || null,
website = $("[name=website]", el).value || null;
localStorage.setItem("isso-author", JSON.stringify(author));
localStorage.setItem("isso-email", JSON.stringify(email));
localStorage.setItem("isso-website", JSON.stringify(website));
api.create($("#isso-thread").getAttribute("data-isso-id"), {
author: author, email: email, website: website,
text: $(".isso-textarea", el).value,
parent: parent || null,
title: $("#isso-thread").getAttribute("data-title") || null,
notification: $("[name=notification]", el).checked() ? 1 : 0,
}).then(function(comment) {
$(".isso-textarea", el).value = "";
insert(comment, true);
if (parent !== null) {
el.onsuccess();
}
});
});
return el;
};
var insert_loader = function(comment, lastcreated) {
var entrypoint;
if (comment.id === null) {
entrypoint = $("#isso-root");
comment.name = 'null';
} else {
entrypoint = $("#isso-" + comment.id + " > .isso-follow-up");
comment.name = comment.id;
}
var el = $.htmlify(template.render("comment-loader", {"comment": comment}));
entrypoint.append(el);
$("a.isso-load-hidden", el).on("click", function() {
el.remove();
api.fetch($("#isso-thread").getAttribute("data-isso-id"),
config["reveal-on-click"], config["max-comments-nested"],
comment.id,
lastcreated).then(
function(rv) {
if (rv.total_replies === 0) {
return;
}
var lastcreated = 0;
rv.replies.forEach(function(commentObject) {
insert(commentObject, false);
if(commentObject.created > lastcreated) {
lastcreated = commentObject.created;
}
});
if(rv.hidden_replies > 0) {
insert_loader(rv, lastcreated);
}
},
function(err) {
console.log(err);
});
});
};
var insert = function(comment, scrollIntoView) {
var el = $.htmlify(template.render("comment", {"comment": comment}));
// update datetime every 60 seconds
var refresh = function() {
$(".isso-permalink > time", el).textContent = i18n.ago(
globals.offset.localTime(), new Date(parseInt(comment.created, 10) * 1000));
setTimeout(refresh, 60*1000);
};
// run once to activate
refresh();
if (config["avatar"]) {
$(".isso-avatar > svg", el).replace(identicons.generate(comment.hash, 4, 48, config));
}
var entrypoint;
if (comment.parent === null) {
entrypoint = $("#isso-root");
} else {
entrypoint = $("#isso-" + comment.parent + " > .isso-follow-up");
}
entrypoint.append(el);
if (scrollIntoView) {
el.scrollIntoView();
}
var footer = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-footer"),
header = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-comment-header"),
text = $("#isso-" + comment.id + " > .isso-text-wrapper > .isso-text");
var form = null; // XXX: probably a good place for a closure
$("a.isso-reply", footer).toggle("click",
function(toggler) {
form = footer.insertAfter(new Postbox(comment.parent === null ? comment.id : comment.parent));
form.onsuccess = function() { toggler.next(); };
$(".isso-textarea", form).focus();
$("a.isso-reply", footer).textContent = i18n.translate("comment-close");
},
function() {
form.remove();
$("a.isso-reply", footer).textContent = i18n.translate("comment-reply");
}
);
if (config.vote) {
var voteLevels = config['vote-levels'];
if (typeof voteLevels === 'string') {
// Eg. -5,5,15
voteLevels = voteLevels.split(',');
}
// update vote counter
var votes = function (value) {
var span = $("span.isso-votes", footer);
if (span === null) {
footer.prepend($.new("span.isso-votes", value));
} else {
span.textContent = value;
}
if (value) {
el.classList.remove('isso-no-votes');
} else {
el.classList.add('isso-no-votes');
}
if (voteLevels) {
var before = true;
for (var index = 0; index <= voteLevels.length; index++) {
if (before && (index >= voteLevels.length || value < voteLevels[index])) {
el.classList.add('isso-vote-level-' + index);
before = false;
} else {
el.classList.remove('isso-vote-level-' + index);
}
}
}
};
$("a.isso-upvote", footer).on("click", function () {
api.like(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
$("a.isso-downvote", footer).on("click", function () {
api.dislike(comment.id).then(function (rv) {
votes(rv.likes - rv.dislikes);
});
});
votes(comment.likes - comment.dislikes);
}
$("a.isso-edit", footer).toggle("click",
function(toggler) {
var edit = $("a.isso-edit", footer);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
edit.textContent = i18n.translate("comment-save");
edit.insertAfter($.new("a.isso-cancel", i18n.translate("comment-cancel"))).on("click", function() {
toggler.canceled = true;
toggler.next();
});
toggler.canceled = false;
api.view(comment.id, 1).then(function(rv) {
var textarea = $.new("textarea.isso-textarea");
textarea.setAttribute("rows", 5);
textarea.setAttribute("minlength", 3);
textarea.setAttribute("maxlength", 65535);
textarea.value = rv.text;
textarea.focus();
text.classList.remove("isso-text");
text.classList.add("isso-textarea-wrapper");
text.textContent = "";
text.append(textarea);
});
if (avatar !== null) {
avatar.hide();
}
},
function(toggler) {
var textarea = $(".isso-textarea", text);
var avatar = config["avatar"] || config["gravatar"] ? $(".isso-avatar", el, false)[0] : null;
if (! toggler.canceled && textarea !== null) {
if (textarea.value.length < 3) {
textarea.focus();
toggler.wait();
return;
} else {
api.modify(comment.id, {"text": textarea.value}).then(function(rv) {
text.innerHTML = rv.text;
comment.text = rv.text;
});
}
} else {
text.innerHTML = comment.text;
}
text.classList.remove("isso-textarea-wrapper");
text.classList.add("isso-text");
if (avatar !== null) {
avatar.show();
}
$("a.isso-cancel", footer).remove();
$("a.isso-edit", footer).textContent = i18n.translate("comment-edit");
}
);
$("a.isso-delete", footer).toggle("click",
function(toggler) {
var del = $("a.isso-delete", footer);
var state = ! toggler.state;
del.textContent = i18n.translate("comment-confirm");
del.on("mouseout", function() {
del.textContent = i18n.translate("comment-delete");
toggler.state = state;
del.onmouseout = null;
});
},
function() {
var del = $("a.isso-delete", footer);
api.remove(comment.id).then(function(rv) {
if (rv) {
el.remove();
} else {
$("span.isso-note", header).textContent = i18n.translate("comment-deleted");
text.innerHTML = "<p> </p>";
$("a.isso-edit", footer).remove();
$("a.isso-delete", footer).remove();
}
del.textContent = i18n.translate("comment-delete");
});
}
);
// remove edit and delete buttons when cookie is gone
var clear = function(button) {
if (! utils.cookie("isso-" + comment.id)) {
if ($(button, footer) !== null) {
$(button, footer).remove();
}
} else {
setTimeout(function() { clear(button); }, 15*1000);
}
};
clear("a.isso-edit");
clear("a.isso-delete");
// show direct reply to own comment when cookie is max aged
var show = function(el) {
if (utils.cookie("isso-" + comment.id)) {
setTimeout(function() { show(el); }, 15*1000);
} else {
footer.append(el);
}
};
if (! config["reply-to-self"] && utils.cookie("isso-" + comment.id)) {
show($("a.isso-reply", footer).detach());
}
if(comment.hasOwnProperty('replies')) {
var lastcreated = 0;
comment.replies.forEach(function(replyObject) {
insert(replyObject, false);
if(replyObject.created > lastcreated) {
lastcreated = replyObject.created;
}
});
if(comment.hidden_replies > 0) {
insert_loader(comment, lastcreated);
}
}
};
module.exports = {
insert: insert,
insert_loader: insert_loader,
Postbox: Postbox,
};
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | Forgot about that, fixed now | BBaoVanC | 13 |
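Editor's note: the diff in the row above swaps the `contenteditable` editing `div` for a native `<textarea>` inside the edit toggle. The following sketch is not part of the PR; it only contrasts the two ways of reading the user's text back out. The element and variable names are hypothetical, and the real client routes the old value through its `utils.text()` helper rather than `textContent`.

```js
// Sketch only: why the edit handler gets simpler with a real <textarea>.
// Assumes a browser DOM; names are illustrative, not from the repository.

// Old widget: a contenteditable <div> stores markup, so plain text has to be
// recovered from its innerHTML before it can be sent to api.modify().
var oldEditArea = document.createElement("div");
oldEditArea.setAttribute("contenteditable", "true");
oldEditArea.innerHTML = "Hello <b>world</b>";
var oldPlainText = oldEditArea.textContent; // "Hello world", markup stripped

// New widget: a <textarea> already holds plain text in .value.
var newEditArea = document.createElement("textarea");
newEditArea.value = "Hello world";
var newPlainText = newEditArea.value; // usable directly, no conversion step

console.log(oldPlainText === newPlainText); // true
```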
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea` - fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/app/templates/postbox.js | var html = function (globals) {
var i18n = globals.i18n;
var conf = globals.conf;
var author = globals.author;
var email = globals.email;
var website = globals.website;
var notify = conf["reply-notifications-default-enabled"] ? " checked" : '';
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<div class='isso-textarea isso-placeholder' contenteditable='true'>" + i18n('postbox-text') + "</div>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<label for='isso-postbox-author'>" + i18n('postbox-author') + "</label>"
+ "<input id='isso-postbox-author' type='text' name='author' placeholder='" + i18n('postbox-author-placeholder') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<label for='isso-postbox-email'>" + i18n('postbox-email') + "</label>"
+ "<input id='isso-postbox-email' type='email' name='email' placeholder='" + i18n('postbox-email-placeholder') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<label for='isso-postbox-website'>" + i18n('postbox-website') + "</label>"
+ "<input id='isso-postbox-website' type='text' name='website' placeholder='" + i18n('postbox-website-placeholder') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox'" + notify + " name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| var html = function (globals) {
var i18n = globals.i18n;
var conf = globals.conf;
var author = globals.author;
var email = globals.email;
var website = globals.website;
var notify = conf["reply-notifications-default-enabled"] ? " checked" : '';
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<textarea class='isso-textarea' rows='5' minlength='3' maxlength='65535' placeholder='" + i18n('postbox-text') + "'></textarea>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<label for='isso-postbox-author'>" + i18n('postbox-author') + "</label>"
+ "<input id='isso-postbox-author' type='text' name='author' placeholder='" + i18n('postbox-author-placeholder') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<label for='isso-postbox-email'>" + i18n('postbox-email') + "</label>"
+ "<input id='isso-postbox-email' type='email' name='email' placeholder='" + i18n('postbox-email-placeholder') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<label for='isso-postbox-website'>" + i18n('postbox-website') + "</label>"
+ "<input id='isso-postbox-website' type='text' name='website' placeholder='" + i18n('postbox-website-placeholder') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox'" + notify + " name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | a) I can understand why you set **rows to 10** here - since we don't have auto-resizing (yet), you want the textarea to be big enough for people to enter medium-length text. But that makes the area quite long; especially on mobile already half a page length is taken up only by the postbox and it's not immediately clear that there will be actual comments when scrolling further down. Let's stick with an initial **5 rows** for now.
b) Why not add a `minlength` as well? let the browser do the validation instead of us having to do that. Should also have the benefit of localized error/hint messages instead of us having to carry translations. | ix5 | 14 |
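Editor's note on point (b) above: the sketch below shows, under stated assumptions, how native constraint validation could take over the length check. It assumes the postbox is wrapped in a real `<form>` element, which the template in this row does not actually do; the `form` and `textarea` variables are illustrative only.

```js
// Sketch only: browser-native validation for the comment textarea.
// Assumes a surrounding <form>; the current postbox template has none.
var form = document.createElement("form");
var textarea = document.createElement("textarea");
textarea.setAttribute("rows", 5);
textarea.setAttribute("minlength", 3);     // browser rejects fewer than 3 chars
textarea.setAttribute("maxlength", 65535); // browser stops input beyond this
textarea.required = true;
form.appendChild(textarea);
document.body.appendChild(form);

form.addEventListener("submit", function (event) {
    if (!form.checkValidity()) {
        event.preventDefault();
        form.reportValidity(); // built-in, localized error hint from the browser
    }
});
```

The localized hint messages the reviewer mentions come from the browser itself, which is what removes the need to carry extra validation translations in the client.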
posativ/isso | 887 | js, templates: Replace `contenteditable` `div` with `textarea` | <!-- Just like NASA going to the moon, it's always good to have a checklist when
creating changes.
The following items are listed to help you create a great Pull Request: -->
## Checklist
- [x] All new and existing **tests are passing**
- [ ] (If adding features:) I have added tests to cover my changes
- [ ] ~~(If docs changes needed:) I have updated the **documentation** accordingly.~~ N/A?
- [x] I have added an entry to `CHANGES.rst` because this is a user-facing change or an important bugfix
- [x] I have written **proper commit message(s)**
<!-- Title ideally under 50 characters (at most 72), good explanations,
affected part of Isso mentioned in title, e.g. "docs: css: Increase font-size on buttons"
Further info: https://github.com/joelparkerhenderson/git-commit-message -->
## What changes does this Pull Request introduce?
- Replace the `contenteditable` `div` with a proper `textarea` - fix indentation bugs
## Why is this necessary?
Fixes #465 | null | 2022-05-26 01:26:57+00:00 | 2022-05-30 17:04:00+00:00 | isso/js/tests/unit/isso.test.js | /**
* @jest-environment jsdom
*/
/* Keep the above exactly as-is!
* https://jestjs.io/docs/configuration#testenvironment-string
* https://jestjs.io/docs/configuration#testenvironmentoptions-object
*/
"use strict";
/*
* Test goals:
* - Test editorify()
* - Test insert()
* - Test insert_loader()
* - Test Postbox()
* Also, untangle Postbox functions from DOM element
*/
test('Editorify text area', () => {
// Set up our document body
document.body.innerHTML =
'<div id=isso-thread></div>' +
// Note: `src` and `data-isso` need to be set,
// else `api` fails to initialize!
'<script src="http://isso.api/js/embed.min.js" data-isso="/"></script>';
let placeholder = 'Type here'
let html = "<div class='isso-textarea isso-placeholder' contenteditable='true'>Type here</div>"
jest.mock('app/i18n', () => ({
translate: jest.fn(key => placeholder),
}));
const isso = require("app/isso");
const $ = require("app/dom");
var textarea = $.htmlify(html);
var isso_thread = $('#isso-thread');
isso_thread.append(textarea);
let area = document.querySelectorAll('.isso-textarea')[0];
isso.editorify(textarea);
expect(textarea.getAttribute('contentEditable')).toBe('true');
// textarea.focus() does not work here,
// Maybe some JSDOM oddities prevent addEventListener()?
area.dispatchEvent(new window.MouseEvent('focus'));
// classList returns {'0': 'class1, '1': 'class2', ...}
expect(textarea.innerHTML).toBe("");
expect(Object.values(textarea.classList)).not.toContain("isso-placeholder");
// textarea.blur() does not work here
area.dispatchEvent(new window.MouseEvent('blur'));
expect(Object.values(textarea.classList)).toContain("isso-placeholder");
expect(textarea.innerHTML).toBe("Type here");
});
| /**
* @jest-environment jsdom
*/
/* Keep the above exactly as-is!
* https://jestjs.io/docs/configuration#testenvironment-string
* https://jestjs.io/docs/configuration#testenvironmentoptions-object
*/
"use strict";
/*
* Test goals:
* - Test editorify()
* - Test insert()
* - Test insert_loader()
* - Test Postbox()
* Also, untangle Postbox functions from DOM element
*/
test.skip('Editorify text area', () => {
// Set up our document body
document.body.innerHTML =
'<div id=isso-thread></div>' +
// Note: `src` and `data-isso` need to be set,
// else `api` fails to initialize!
'<script src="http://isso.api/js/embed.min.js" data-isso="/"></script>';
let placeholder = 'Type here'
let html = "<div class='isso-textarea isso-placeholder' contenteditable='true'>Type here</div>"
jest.mock('app/i18n', () => ({
translate: jest.fn(key => placeholder),
}));
const isso = require("app/isso");
const $ = require("app/dom");
var textarea = $.htmlify(html);
var isso_thread = $('#isso-thread');
isso_thread.append(textarea);
let area = document.querySelectorAll('.isso-textarea')[0];
isso.editorify(textarea);
expect(textarea.getAttribute('contentEditable')).toBe('true');
// textarea.focus() does not work here,
// Maybe some JSDOM oddities prevent addEventListener()?
area.dispatchEvent(new window.MouseEvent('focus'));
// classList returns {'0': 'class1, '1': 'class2', ...}
expect(textarea.innerHTML).toBe("");
expect(Object.values(textarea.classList)).not.toContain("isso-placeholder");
// textarea.blur() does not work here
area.dispatchEvent(new window.MouseEvent('blur'));
expect(Object.values(textarea.classList)).toContain("isso-placeholder");
expect(textarea.innerHTML).toBe("Type here");
});
| BBaoVanC | b2a1c611461b73f3b67c892e04377536b9a2ce4c | 10e5a90df94d8f1b710f5b1d9a22172fb84e8978 | Please instead `test.skip()` this for now. We'll want to test the whole widget at some point in the future again. | ix5 | 15 |
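Editor's note: the review above asks for `test.skip()` instead of deleting or leaving the broken test enabled. A minimal Jest sketch of the markers involved follows; the test names are placeholders, not taken from the repository.

```js
// Sketch only: Jest markers for temporarily parked tests.
// A skipped test stays in the file and shows up as "skipped" in the report,
// so the intent to re-enable it later remains visible.
test.skip("Editorify text area (disabled until the widget is testable)", () => {
    expect(true).toBe(true); // kept but never executed
});

// test.todo() records planned coverage without a body at all.
test.todo("Test the whole postbox widget");
```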
posativ/isso | 867 | README: Include more information, new screenshot | Add more sections, with more links, small install guide, add new screenshot.
This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there.
Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/)
Screenshot used:
 | null | 2022-05-07 19:59:01+00:00 | 2022-05-07 19:59:12+00:00 | README.md | # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
See **[posativ.org/isso](http://posativ.org/isso/)** for more details and
documentation.

## License
MIT, see [LICENSE](LICENSE).
## Development
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
| # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
## Features
- **Comments written in Markdown**
Users can edit or delete own comments (within 15 minutes by default).
Comments in moderation queue are not publicly visible before activation.
- **SQLite backend**
*Because comments are not Big Data.*
- **Disqus & WordPress Import**
You can migrate your Disqus/WordPress comments without any hassle.
- **Configurable JS client**
Embed a single JS file, 65kB (20kB gzipped) and you are done.
See **[posativ.org/isso](http://posativ.org/isso/)** for a **live demo**, more
details and [documentation](https://posativ.org/isso/docs/).
## Screenshot

## Geting started
**Requirements**
- Python 3.6+ (+ devel headers)
- SQLite 3.3.8 or later
- a working C compiler
Install Isso from [PyPi](https://pypi.python.org/pypi/isso/):
```console
pip install isso
```
Then, follow the [Quickstart](https://posativ.org/isso/docs/quickstart/) guide.
If you're stuck, follow the [Install guide](https://posativ.org/isso/docs/install/),
see [Troubleshooting](https://posativ.org/isso/docs/troubleshooting/) and browse
the [the full documentation](https://posativ.org/isso/docs/).
## Contributing
- Pull requests are very much welcome! These might be
[good first issues](https://github.com/posativ/isso/labels/good-first-issue)
- See [Ways to Contribute](https://posativ.org/isso/contribute/)
- [Translate](https://posativ.org/isso/contribute/#translations)
### Development
<!-- TODO also mention "Development & Testing" section once new docs uploaded -->
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
### Help
- Join `#isso` via [Matrix](https://matrix.to/#/#isso:libera.chat) or via IRC on
[Libera.Chat](https://libera.chat/)
- Ask a question on [GitHub Discussions](https://github.com/posativ/isso/discussions).
## License
MIT, see [LICENSE](LICENSE).
| ix5 | 25724bc8fdcd71d57dfd983f45e38f56116e76ed | 05ba6023da7337340208c16e1108318195ceb5d4 | Why do we need devel headers or a C compiler? @ix5 | jelmer | 16 |
posativ/isso | 867 | README: Include more information, new screenshot | Add more sections, with more links, small install guide, add new screenshot.
This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there.
Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/)
Screenshot used:
 | null | 2022-05-07 19:59:01+00:00 | 2022-05-07 19:59:12+00:00 | README.md | # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
See **[posativ.org/isso](http://posativ.org/isso/)** for more details and
documentation.

## License
MIT, see [LICENSE](LICENSE).
## Development
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
| # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
## Features
- **Comments written in Markdown**
Users can edit or delete own comments (within 15 minutes by default).
Comments in moderation queue are not publicly visible before activation.
- **SQLite backend**
*Because comments are not Big Data.*
- **Disqus & WordPress Import**
You can migrate your Disqus/WordPress comments without any hassle.
- **Configurable JS client**
Embed a single JS file, 65kB (20kB gzipped) and you are done.
See **[posativ.org/isso](http://posativ.org/isso/)** for a **live demo**, more
details and [documentation](https://posativ.org/isso/docs/).
## Screenshot

## Geting started
**Requirements**
- Python 3.6+ (+ devel headers)
- SQLite 3.3.8 or later
- a working C compiler
Install Isso from [PyPi](https://pypi.python.org/pypi/isso/):
```console
pip install isso
```
Then, follow the [Quickstart](https://posativ.org/isso/docs/quickstart/) guide.
If you're stuck, follow the [Install guide](https://posativ.org/isso/docs/install/),
see [Troubleshooting](https://posativ.org/isso/docs/troubleshooting/) and browse
the [the full documentation](https://posativ.org/isso/docs/).
## Contributing
- Pull requests are very much welcome! These might be
[good first issues](https://github.com/posativ/isso/labels/good-first-issue)
- See [Ways to Contribute](https://posativ.org/isso/contribute/)
- [Translate](https://posativ.org/isso/contribute/#translations)
### Development
<!-- TODO also mention "Development & Testing" section once new docs uploaded -->
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
### Help
- Join `#isso` via [Matrix](https://matrix.to/#/#isso:libera.chat) or via IRC on
[Libera.Chat](https://libera.chat/)
- Ask a question on [GitHub Discussions](https://github.com/posativ/isso/discussions).
## License
MIT, see [LICENSE](LICENSE).
| ix5 | 25724bc8fdcd71d57dfd983f45e38f56116e76ed | 05ba6023da7337340208c16e1108318195ceb5d4 | This was a copy-paste from the docs. Probably necessary for misaka and CFFI things? | ix5 | 17 |
posativ/isso | 867 | README: Include more information, new screenshot | Add more sections, with more links, small install guide, add new screenshot.
This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there.
Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/)
Screenshot used:
 | null | 2022-05-07 19:59:01+00:00 | 2022-05-07 19:59:12+00:00 | README.md | # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
See **[posativ.org/isso](http://posativ.org/isso/)** for more details and
documentation.

## License
MIT, see [LICENSE](LICENSE).
## Development
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
| # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
## Features
- **Comments written in Markdown**
Users can edit or delete own comments (within 15 minutes by default).
Comments in moderation queue are not publicly visible before activation.
- **SQLite backend**
*Because comments are not Big Data.*
- **Disqus & WordPress Import**
You can migrate your Disqus/WordPress comments without any hassle.
- **Configurable JS client**
Embed a single JS file, 65kB (20kB gzipped) and you are done.
See **[posativ.org/isso](http://posativ.org/isso/)** for a **live demo**, more
details and [documentation](https://posativ.org/isso/docs/).
## Screenshot

## Geting started
**Requirements**
- Python 3.6+ (+ devel headers)
- SQLite 3.3.8 or later
- a working C compiler
Install Isso from [PyPi](https://pypi.python.org/pypi/isso/):
```console
pip install isso
```
Then, follow the [Quickstart](https://posativ.org/isso/docs/quickstart/) guide.
If you're stuck, follow the [Install guide](https://posativ.org/isso/docs/install/),
see [Troubleshooting](https://posativ.org/isso/docs/troubleshooting/) and browse
the [the full documentation](https://posativ.org/isso/docs/).
## Contributing
- Pull requests are very much welcome! These might be
[good first issues](https://github.com/posativ/isso/labels/good-first-issue)
- See [Ways to Contribute](https://posativ.org/isso/contribute/)
- [Translate](https://posativ.org/isso/contribute/#translations)
### Development
<!-- TODO also mention "Development & Testing" section once new docs uploaded -->
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
### Help
- Join `#isso` via [Matrix](https://matrix.to/#/#isso:libera.chat) or via IRC on
[Libera.Chat](https://libera.chat/)
- Ask a question on [GitHub Discussions](https://github.com/posativ/isso/discussions).
## License
MIT, see [LICENSE](LICENSE).
| ix5 | 25724bc8fdcd71d57dfd983f45e38f56116e76ed | 05ba6023da7337340208c16e1108318195ceb5d4 | It looks like you were also the one who added it to the docs in 93d05c46fcb946a5bc2de5808158015c03fce988
I don't think we need C headers for something that's CFFI. | jelmer | 18 |
posativ/isso | 867 | README: Include more information, new screenshot | Add more sections, with more links, small install guide, add new screenshot.
This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there.
Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/)
Screenshot used:
 | null | 2022-05-07 19:59:01+00:00 | 2022-05-07 19:59:12+00:00 | README.md | # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
See **[posativ.org/isso](http://posativ.org/isso/)** for more details and
documentation.

## License
MIT, see [LICENSE](LICENSE).
## Development
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
| # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
## Features
- **Comments written in Markdown**
Users can edit or delete own comments (within 15 minutes by default).
Comments in moderation queue are not publicly visible before activation.
- **SQLite backend**
*Because comments are not Big Data.*
- **Disqus & WordPress Import**
You can migrate your Disqus/WordPress comments without any hassle.
- **Configurable JS client**
Embed a single JS file, 65kB (20kB gzipped) and you are done.
See **[posativ.org/isso](http://posativ.org/isso/)** for a **live demo**, more
details and [documentation](https://posativ.org/isso/docs/).
## Screenshot

## Geting started
**Requirements**
- Python 3.6+ (+ devel headers)
- SQLite 3.3.8 or later
- a working C compiler
Install Isso from [PyPi](https://pypi.python.org/pypi/isso/):
```console
pip install isso
```
Then, follow the [Quickstart](https://posativ.org/isso/docs/quickstart/) guide.
If you're stuck, follow the [Install guide](https://posativ.org/isso/docs/install/),
see [Troubleshooting](https://posativ.org/isso/docs/troubleshooting/) and browse
the [the full documentation](https://posativ.org/isso/docs/).
## Contributing
- Pull requests are very much welcome! These might be
[good first issues](https://github.com/posativ/isso/labels/good-first-issue)
- See [Ways to Contribute](https://posativ.org/isso/contribute/)
- [Translate](https://posativ.org/isso/contribute/#translations)
### Development
<!-- TODO also mention "Development & Testing" section once new docs uploaded -->
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
### Help
- Join `#isso` via [Matrix](https://matrix.to/#/#isso:libera.chat) or via IRC on
[Libera.Chat](https://libera.chat/)
- Ask a question on [GitHub Discussions](https://github.com/posativ/isso/discussions).
## License
MIT, see [LICENSE](LICENSE).
| ix5 | 25724bc8fdcd71d57dfd983f45e38f56116e76ed | 05ba6023da7337340208c16e1108318195ceb5d4 | SQLite3 is bundled with python, isn't it? You don't really need the command-line tool. | jelmer | 19 |
posativ/isso | 867 | README: Include more information, new screenshot | Add more sections, with more links, small install guide, add new screenshot.
This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there.
Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/)
Screenshot used:
 | null | 2022-05-07 19:59:01+00:00 | 2022-05-07 19:59:12+00:00 | README.md | # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
See **[posativ.org/isso](http://posativ.org/isso/)** for more details and
documentation.

## License
MIT, see [LICENSE](LICENSE).
## Development
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
| # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
## Features
- **Comments written in Markdown**
Users can edit or delete own comments (within 15 minutes by default).
Comments in moderation queue are not publicly visible before activation.
- **SQLite backend**
*Because comments are not Big Data.*
- **Disqus & WordPress Import**
You can migrate your Disqus/WordPress comments without any hassle.
- **Configurable JS client**
Embed a single JS file, 65kB (20kB gzipped) and you are done.
See **[posativ.org/isso](http://posativ.org/isso/)** for a **live demo**, more
details and [documentation](https://posativ.org/isso/docs/).
## Screenshot

## Geting started
**Requirements**
- Python 3.6+ (+ devel headers)
- SQLite 3.3.8 or later
- a working C compiler
Install Isso from [PyPi](https://pypi.python.org/pypi/isso/):
```console
pip install isso
```
Then, follow the [Quickstart](https://posativ.org/isso/docs/quickstart/) guide.
If you're stuck, follow the [Install guide](https://posativ.org/isso/docs/install/),
see [Troubleshooting](https://posativ.org/isso/docs/troubleshooting/) and browse
the [the full documentation](https://posativ.org/isso/docs/).
## Contributing
- Pull requests are very much welcome! These might be
[good first issues](https://github.com/posativ/isso/labels/good-first-issue)
- See [Ways to Contribute](https://posativ.org/isso/contribute/)
- [Translate](https://posativ.org/isso/contribute/#translations)
### Development
<!-- TODO also mention "Development & Testing" section once new docs uploaded -->
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
### Help
- Join `#isso` via [Matrix](https://matrix.to/#/#isso:libera.chat) or via IRC on
[Libera.Chat](https://libera.chat/)
- Ask a question on [GitHub Discussions](https://github.com/posativ/isso/discussions).
## License
MIT, see [LICENSE](LICENSE).
| ix5 | 25724bc8fdcd71d57dfd983f45e38f56116e76ed | 05ba6023da7337340208c16e1108318195ceb5d4 | Devel headers were added way back in [d1a0b3f6](https://github.com/posativ/isso/commit/d1a0b3f6f9d8904b1772d82733608e7fa98105de#diff-9fac36599c82a968157f5bf7ece3fc8c3176fa46bd69a1fc6bd295178b850c1a), I just happen to show up in the blame logs because I bumped versions a couple of times and moved files ;)
As for the validity, maybe we ought to collect empirical data by removing devel headers from our respective systems and seeing whether a full installation in a clean virtualenv (including misaka CFFI compilation) still works. | ix5 | 20 |
posativ/isso | 867 | README: Include more information, new screenshot | Add more sections, with more links, small install guide, add new screenshot.
This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there.
Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/)
Screenshot used:
 | null | 2022-05-07 19:59:01+00:00 | 2022-05-07 19:59:12+00:00 | README.md | # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
See **[posativ.org/isso](http://posativ.org/isso/)** for more details and
documentation.

## License
MIT, see [LICENSE](LICENSE).
## Development
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
| # Isso – a commenting server similar to Disqus
Isso – *Ich schrei sonst* – is a lightweight commenting server written in
Python and JavaScript. It aims to be a drop-in replacement for
[Disqus](http://disqus.com).
## Features
- **Comments written in Markdown**
Users can edit or delete own comments (within 15 minutes by default).
Comments in moderation queue are not publicly visible before activation.
- **SQLite backend**
*Because comments are not Big Data.*
- **Disqus & WordPress Import**
You can migrate your Disqus/WordPress comments without any hassle.
- **Configurable JS client**
Embed a single JS file, 65kB (20kB gzipped) and you are done.
See **[posativ.org/isso](http://posativ.org/isso/)** for a **live demo**, more
details and [documentation](https://posativ.org/isso/docs/).
## Screenshot

## Geting started
**Requirements**
- Python 3.6+ (+ devel headers)
- SQLite 3.3.8 or later
- a working C compiler
Install Isso from [PyPi](https://pypi.python.org/pypi/isso/):
```console
pip install isso
```
Then, follow the [Quickstart](https://posativ.org/isso/docs/quickstart/) guide.
If you're stuck, follow the [Install guide](https://posativ.org/isso/docs/install/),
see [Troubleshooting](https://posativ.org/isso/docs/troubleshooting/) and browse
the [the full documentation](https://posativ.org/isso/docs/).
## Contributing
- Pull requests are very much welcome! These might be
[good first issues](https://github.com/posativ/isso/labels/good-first-issue)
- See [Ways to Contribute](https://posativ.org/isso/contribute/)
- [Translate](https://posativ.org/isso/contribute/#translations)
### Development
<!-- TODO also mention "Development & Testing" section once new docs uploaded -->
Refer to the docs for
[Installing from Source](https://posativ.org/isso/docs/install/#install-from-source).
### Help
- Join `#isso` via [Matrix](https://matrix.to/#/#isso:libera.chat) or via IRC on
[Libera.Chat](https://libera.chat/)
- Ask a question on [GitHub Discussions](https://github.com/posativ/isso/discussions).
## License
MIT, see [LICENSE](LICENSE).
| ix5 | 25724bc8fdcd71d57dfd983f45e38f56116e76ed | 05ba6023da7337340208c16e1108318195ceb5d4 | Same as for the devel headers, let's verify and then I'm not at all opposed to removing those docs lines. | ix5 | 21 |
posativ/isso | 846 | [client] js: Support enabling reply notifications checkbox by default | Fixes #837
Configure by setting `data-isso-reply-notifications-default-enabled` to `"true"` in the client settings. If set to true, then the checkbox for enabling reply notifications will be checked by default:
 | null | 2022-04-25 22:41:08+00:00 | 2022-05-05 21:48:36+00:00 | isso/js/app/templates/postbox.js | var html = function (globals) {
var i18n = globals.i18n;
var author = globals.author;
var email = globals.email;
var website = globals.website;
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<div class='isso-textarea isso-placeholder' contenteditable='true'>" + i18n('postbox-text') + "</div>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='author' placeholder='" + i18n('postbox-author') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='email' name='email' placeholder='" + i18n('postbox-email') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='website' placeholder='" + i18n('postbox-website') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox' name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| var html = function (globals) {
var i18n = globals.i18n;
var conf = globals.conf;
var author = globals.author;
var email = globals.email;
var website = globals.website;
var notify = conf["reply-notifications-default-enabled"] ? " checked" : '';
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<div class='isso-textarea isso-placeholder' contenteditable='true'>" + i18n('postbox-text') + "</div>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='author' placeholder='" + i18n('postbox-author') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='email' name='email' placeholder='" + i18n('postbox-email') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='website' placeholder='" + i18n('postbox-website') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox'" + notify + " name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| BBaoVanC | 0b33a29fe58ccc128f460df26f9e489625ecb235 | 795aed68ea5c39ccb4d96b3a5e1a660d2ea4174b | Maybe someone has a nicer suggestion than this ternary. But not a big deal. | ix5 | 22 |
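Editor's note: the PR description in this row configures the new behaviour through a `data-isso-reply-notifications-default-enabled` attribute on the embed script tag. The sketch below is a hypothetical illustration of how such a boolean attribute could end up as the `conf["reply-notifications-default-enabled"]` key used by the template; it is not the project's actual config parser.

```js
// Hypothetical sketch: mapping a boolean data-isso-* attribute onto `conf`.
// Not the real app/config implementation.
var conf = { "reply-notifications-default-enabled": false };

var script = document.querySelector("script[data-isso]");
if (script !== null) {
    var attr = script.getAttribute("data-isso-reply-notifications-default-enabled");
    if (attr !== null) {
        conf["reply-notifications-default-enabled"] = (attr === "true");
    }
}

// With the attribute set to "true", the template then renders ` checked`:
var notify = conf["reply-notifications-default-enabled"] ? " checked" : "";
console.log("<input type='checkbox'" + notify + " name='notification' />");
```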
posativ/isso | 846 | [client] js: Support enabling reply notifications checkbox by default | Fixes #837
Configure by setting `data-isso-reply-notifications-default-enabled` to `"true"` in the client settings. If set to true, then the checkbox for enabling reply notifications will be checked by default:
 | null | 2022-04-25 22:41:08+00:00 | 2022-05-05 21:48:36+00:00 | isso/js/app/templates/postbox.js | var html = function (globals) {
var i18n = globals.i18n;
var author = globals.author;
var email = globals.email;
var website = globals.website;
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<div class='isso-textarea isso-placeholder' contenteditable='true'>" + i18n('postbox-text') + "</div>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='author' placeholder='" + i18n('postbox-author') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='email' name='email' placeholder='" + i18n('postbox-email') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='website' placeholder='" + i18n('postbox-website') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox' name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| var html = function (globals) {
var i18n = globals.i18n;
var conf = globals.conf;
var author = globals.author;
var email = globals.email;
var website = globals.website;
var notify = conf["reply-notifications-default-enabled"] ? " checked" : '';
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<div class='isso-textarea isso-placeholder' contenteditable='true'>" + i18n('postbox-text') + "</div>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='author' placeholder='" + i18n('postbox-author') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='email' name='email' placeholder='" + i18n('postbox-email') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='website' placeholder='" + i18n('postbox-website') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox'" + notify + " name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| BBaoVanC | 0b33a29fe58ccc128f460df26f9e489625ecb235 | 795aed68ea5c39ccb4d96b3a5e1a660d2ea4174b | Would it be good to change it to be a boolean (just `conf["reply-notifications-default-enabled"]`), and then put the ternary inside with the other template code? | BBaoVanC | 23 |
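Editor's note: a sketch of the alternative raised in the comment above, keeping `notify` as a plain boolean and moving the ternary next to the attribute it controls. `conf` is stubbed here; in the template it comes from `globals.conf`. Whether this reads better than the precomputed string is exactly the style question the thread leaves open.

```js
// Sketch of the suggested variant: boolean config value, ternary inlined
// where the `checked` attribute is emitted.
var conf = { "reply-notifications-default-enabled": true };

var notify = conf["reply-notifications-default-enabled"];
var checkbox = "<input type='checkbox'" + (notify ? " checked" : "") + " name='notification' />";
console.log(checkbox); // <input type='checkbox' checked name='notification' />
```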
posativ/isso | 846 | [client] js: Support enabling reply notifications checkbox by default | Fixes #837
Configure by setting `data-isso-reply-notifications-default-enabled` to `"true"` in the client settings. If set to true, then the checkbox for enabling reply notifications will be checked by default:
 | null | 2022-04-25 22:41:08+00:00 | 2022-05-05 21:48:36+00:00 | isso/js/app/templates/postbox.js | var html = function (globals) {
var i18n = globals.i18n;
var author = globals.author;
var email = globals.email;
var website = globals.website;
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<div class='isso-textarea isso-placeholder' contenteditable='true'>" + i18n('postbox-text') + "</div>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='author' placeholder='" + i18n('postbox-author') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='email' name='email' placeholder='" + i18n('postbox-email') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='website' placeholder='" + i18n('postbox-website') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox' name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| var html = function (globals) {
var i18n = globals.i18n;
var conf = globals.conf;
var author = globals.author;
var email = globals.email;
var website = globals.website;
var notify = conf["reply-notifications-default-enabled"] ? " checked" : '';
return "" +
"<div class='isso-postbox'>"
+ "<div class='isso-form-wrapper'>"
+ "<div class='isso-textarea-wrapper'>"
+ "<div class='isso-textarea isso-placeholder' contenteditable='true'>" + i18n('postbox-text') + "</div>"
+ "<div class='isso-preview'>"
+ "<div class='isso-comment'>"
+ "<div class='isso-text-wrapper'>"
+ "<div class='isso-text'></div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "</div>"
+ "<section class='isso-auth-section'>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='author' placeholder='" + i18n('postbox-author') + "' value='" + (author ? author : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='email' name='email' placeholder='" + i18n('postbox-email') + "' value='" + (email ? email : '') + "' />"
+ "</p>"
+ "<p class='isso-input-wrapper'>"
+ "<input type='text' name='website' placeholder='" + i18n('postbox-website') + "' value='" + (website ? website : '') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='submit' value='" + i18n('postbox-submit') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='preview' value='" + i18n('postbox-preview') + "' />"
+ "</p>"
+ "<p class='isso-post-action'>"
+ "<input type='button' name='edit' value='" + i18n('postbox-edit') + "' />"
+ "</p>"
+ "</section>"
+ "<section class='isso-notification-section'>"
+ "<label>"
+ "<input type='checkbox'" + notify + " name='notification' />" + i18n('postbox-notification')
+ "</label>"
+ "</section>"
+ "</div>"
+ "</div>"
};
module.exports = html;
| BBaoVanC | 0b33a29fe58ccc128f460df26f9e489625ecb235 | 795aed68ea5c39ccb4d96b3a5e1a660d2ea4174b | Eh, it's nitpicking and I'm not even qualified to comment on any Javascript styles. I was hoping for someone to have a better idea, but it's not important. Leave it as-is. | ix5 | 24 |