Dataset schema (column, type, and observed value range):

| column | type | values |
|---|---|---|
| forum_id | string | length 8 to 20 |
| forum_title | string | length 1 to 899 |
| forum_authors | sequence | length 0 to 174 |
| forum_abstract | string | length 0 to 4.69k |
| forum_keywords | sequence | length 0 to 35 |
| forum_pdf_url | string | length 38 to 50 |
| forum_url | string | length 40 to 52 |
| note_id | string | length 8 to 20 |
| note_type | string | 6 distinct classes |
| note_created | int64 | 1,360B to 1,737B |
| note_replyto | string | length 4 to 20 |
| note_readers | sequence | length 1 to 8 |
| note_signatures | sequence | length 1 to 2 |
| venue | string | 349 distinct classes |
| year | string | 12 distinct classes |
| note_text | string | length 10 to 56.5k |
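For orientation, here is a minimal sketch of how data with this schema could be loaded and inspected. The Parquet file name is a placeholder (the actual file layout is not specified above), and the millisecond interpretation of `note_created` is an assumption inferred from the timestamp values that appear in the rows below.

```python
import pandas as pd

# Placeholder path: the real file layout of this dataset is not given above.
df = pd.read_parquet("openreview_notes.parquet")

# One row per OpenReview note; the parent forum's metadata is repeated on each row.
print(df.dtypes)

# note_created looks like a millisecond Unix timestamp (values around 1.36e12 to 1.74e12).
df["note_created"] = pd.to_datetime(df["note_created"], unit="ms")

# note_type has 6 distinct classes (e.g. official_review, meta_review, decision).
print(df["note_type"].value_counts())

# Regroup notes by paper (forum).
for forum_id, notes in df.groupby("forum_id"):
    print(forum_id, notes["forum_title"].iloc[0], f"({len(notes)} notes)")
    break  # show only the first forum
```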
JBH3mtjG9I
FreqRISE: Explaining time series using frequency masking
[ "Thea Brüsch", "Kristoffer Knutsen Wickstrøm", "Mikkel N. Schmidt", "Tommy Sonne Alstrøm", "Robert Jenssen" ]
Time series data is fundamentally important for many critical domains such as healthcare, finance, and climate, where explainable models are necessary for safe automated decision-making. Developing explainable artificial intelligence in these domains therefore implies explaining the salient information in the time series. Current methods for obtaining saliency maps assume localized information in the raw input space. In this paper, we argue that the salient information of many time series is more likely to be localized in the frequency domain. We propose FreqRISE, which uses masking-based methods to produce explanations in the frequency and time-frequency domains, and outperforms strong baselines across a number of tasks.
[ "Explainability", "Time series data", "Audio data" ]
https://openreview.net/pdf?id=JBH3mtjG9I
https://openreview.net/forum?id=JBH3mtjG9I
Wn2RU0DK8X
official_review
1,728,481,697,543
JBH3mtjG9I
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission11/Reviewer_SgPr" ]
NLDL.org/2025/Conference
2025
title: Interesting proposal, but lacking crucial information to establish its significance

summary: The authors propose an explainable model to highlight the importance of frequencies in signals. By masking the time-frequency representation, the authors can output a relevance map of the frequencies by time, providing additional information compared to simply masking on either the time or frequency domain alone. The proposal is tested against two baselines with two datasets (one synthetic and one real) for three tasks.

strengths: The paper presents an innovative approach by masking the time-frequency representation and producing a relevance map of the time-frequency representation.

weaknesses: The paper lacks details and misses important baselines. It needs rewriting to make the content clearer. Additionally, the story should, in my opinion, be revised from time series to signals, as nothing guarantees that the method can be applied to usual time series.

confidence: 4

justification: The proposal is interesting and the results look promising, but I suggest rejecting this paper as it lacks clarity and needs more comparisons to clearly draw conclusions on its relevance. Please find more details below.

## Goal

According to the authors (line 111),

> Our aim is to explain the black box model $f(·)$.

Is this goal achieved? The authors manage to generate a time-frequency relevance map, but does it help in better understanding $f$? According to the experiments, it seems that the proposal increases complexity, so is this proposal more explainable? Lines 323-325 state,

> [...] indicating that FreqRISE struggles more to identify relevant features in the time-frequency domain.

Doesn't this mean that the proposal is not efficient?

## Clarity

### Figure 5

> The relevance map shows that most relevance is put just at the beginning of the signal.

As far as I understand the figure, and following Figure 4, where dark blue shows relevance, the relevance in Figure 5 is around 0.5 (dark blue area) and not at the beginning, where it looks like there is no relevance. Important information is missing, such as the meaning of the color bar and why the time-frequency relevance is useful to explain $f$.

> This shows the benefit of having both the time and frequency component when computing relevance maps for the digit task.

But what can we conclude here? How is it relevant for the gender task? The frequency-domain relevance seems sufficient. For the reader to really appreciate the importance of the time-frequency relevance, the authors should provide the equivalent of Figures 4 and 5 for the digit task and the synthetic dataset. As of now, it seems the time-frequency domain significantly increases complexity and improves results, but does not enhance explainability.

### Proposal

The proposal is freqRISE. Then why do the authors sometimes refer to it as RISE or (freq)RISE? Something is not clear here. If I understand correctly, freqRISE is an enhanced version of RISE; it has the same properties as RISE (time-based relevance) and also produces time-frequency relevance. Therefore, when referring to freqRISE, the authors should use the name freqRISE, not RISE.

## Baselines

This brings me to my next comment: why is RISE not a baseline? FreqRISE, being an enhanced version, should provide better performance than RISE, but this is not shown in the paper. If RISE only increases complexity without improving performance, does that make it a relevant enhancement? If the authors want to demonstrate that masking in the frequency domain is better than masking in the time domain, they need to compare with such solutions. Additionally, why are TimeREISE and RELAX not baselines?

## Story

The proposal primarily focuses on signals rather than general time series data. Therefore, I would suggest replacing the mention of "time series" with "audio signals" or "time signals." There is no guarantee that the proposed methods and baselines would work for typical time series data such as electricity consumption, solar generation, traffic, weather, or exchange rates (especially for exchange rates, where frequency might not be representative). Time series data are usually collected over several years, making datasets complex. Additionally, the authors mention that for AudioMNIST their proposal would require a large number of masks (20 000). For usual time series data, it would likely require even more masks (especially considering that some of these datasets have multiple features), significantly increasing complexity. For these reasons, I would revise the introduction to refer to "time signals" instead of "time series."

## Additional Comments for Future Revision

> [...] provide a comprehensive evaluation of our proposed approach across several datasets and tasks.

Only two datasets and three tasks are considered; can we talk about "several"?

Line 196:

> Both models achieve an accuracy of 100%.

This should be in the results discussion, not in the dataset description. The same applies to lines 215-217.

### Figure 1

STDFT in the caption should be spelled out in full. Moreover, the framework in Figure 1 is a generic framework with $g$ being the function that transposes the input from the time to the frequency domain, so the caption should not mention STDFT, or should only mention STDFT as an example of the function $g$.

### Figure 4

I am not sure what we are looking at in Figure 4, especially for LRP. More explanation is required. For FreqRISE, the blue lines in the frequency domain are the relevant frequencies for the task, but what do the blue lines in the time domain indicate? Are they the temporal positions where these frequencies are relevant, or are they the relevant time steps? LRP, working in the time domain, operates the other way around: it determines the relevant time steps for the given task, and then we can identify which frequencies correspond to these time steps in the frequency domain. Also, the authors should use subcaptions such as (a), (b), (c), (d) to make reading and referencing easier, instead of using "top" and "bottom".

### Figure 3

The authors should compare their model with a dummy model that randomly selects the class. For instance, when determining the gender class, the dummy model has a 50% chance of being correct. A complex model that does not perform better than the dummy model should not be considered efficient. The dummy model, which purely guesses randomly, is not affected by removing frequencies and should have constant accuracy. However, the relevance of the "baselines" **Rand.** and **Amp.** is unclear. It is not specified which model was used to obtain the corresponding plots. If the authors intend to compare different methods of removing frequencies (1. from most important to least important based on the relevance map, 2. randomly, and 3. based on amplitude), they should provide plots for each method using the different baselines (IG, LRP, freqRISE), or at least provide more information on what **Rand.** and **Amp.** represent.
final_rebuttal_confidence: 4 final_rebuttal_justification: Thank you to the authors for addressing my comments and engaging in an interesting exchange. I find the proposal compelling, and the results are promising. However, this version of the paper would benefit from additional comparative analysis and, above all, greater clarity to better establish the proposal's relevance. Additionally, the datasets used may not capture the full range of potential applications, making it challenging to generalize the performance findings presented. Other reviewers also noted insufficient discussion of the results and a lack of guidance on how to use the proposal (especially how to select the different hyperparameters). Together, these points contribute to my recommendation to reject this paper in its current form.
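To make the frequency-masking idea discussed above concrete, here is a minimal, illustrative sketch of RISE-style relevance estimation in the frequency domain. It is not the authors' implementation; the `model` callable, mask count, and keep probability are assumptions for illustration only.

```python
import numpy as np

def freq_mask_relevance(x, model, n_masks=1000, p_keep=0.5, rng=None):
    """Illustrative sketch of masking-based relevance over DFT coefficients.

    `model` is assumed to map a 1-D signal to a scalar class score.
    """
    rng = np.random.default_rng(rng)
    X = np.fft.rfft(x)                      # move the signal to the frequency domain
    n_bins = X.shape[0]
    relevance = np.zeros(n_bins)
    for _ in range(n_masks):
        mask = rng.random(n_bins) < p_keep  # keep a random subset of frequency bins
        x_masked = np.fft.irfft(X * mask, n=len(x))  # back to the time domain
        relevance += model(x_masked) * mask # credit the kept bins with the model score
    return relevance / (n_masks * p_keep)   # normalize by the expected keep rate
```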
INmPMGFlwR
Smooth-edged Perturbations Improve Perturbation-based Image Explanations
[]
Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models by perturbing parts of the input to measure how those parts affect the output. Due to the intractability of perturbing each pixel individually, images are typically attributed to larger segments. The Randomized Input Sampling for Explanations (RISE) method solved this issue by using smooth perturbation masks. While this method has proven effective and popular, it has not been investigated which parts of the method are responsible for its success. This work tests many combinations of mask sampling, segmentation techniques, smoothing, and attribution calculation. The results show that the RISE-style pixel attribution is beneficial to all evaluated methods. Furthermore, it is shown that attribution calculation is the least impactful parameter. The code and data gathered in this work are available online: Removed for anonymization.
[ "XAI", "post-hoc", "model-agnostic", "perturbation-based", "occlusion", "computer vision" ]
https://openreview.net/pdf?id=INmPMGFlwR
https://openreview.net/forum?id=INmPMGFlwR
ud0iN0Q788
official_review
1,728,517,179,352
INmPMGFlwR
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission24/Reviewer_5sjX" ]
NLDL.org/2025/Conference
2025
title: Review

summary: In this paper, the authors provide an analysis of the interaction between different components of the perturbation-based feature attribution pipeline. The authors use existing off-the-shelf methods to understand that pipeline, which includes segmentation, perturbation, and attribution analysis. Pre-trained CNNs such as VGG and AlexNet are used for evaluation, and metrics such as SRG are reported for quantitative evaluation of each combination.

strengths: The paper provides a detailed overview of perturbation-based methods for XAI and illustrates every component well. The paper has very few typos.

weaknesses:
1. The main novelty and key intuitions behind this work are a bit unclear to me. Is the goal of the paper to understand RISE, or is it to conduct a benchmark study on the different components of the perturbation-based attribution pipeline? Upon reading the paper a few times, I think it is the latter, although it is not very clear.
2. How does this approach compare with existing gradient-based methods such as Grad-CAM? Although the authors have briefly mentioned this in the discussion section, this is still needed to understand the impact of the contribution.
3. In addition to the SRG metrics, can the authors also utilize other popular metrics such as the log-odds ratio [1] to evaluate performance?
4. Can the authors qualitatively contrast the proposed approach with the related papers [1, 2]?
5. Can the authors explain why the quantitative evaluation has been conducted on only 2% of the validation dataset?

[1] Schwab, P.; and Karlen, W. 2019. CXPlain: Causal explanations for model interpretation under uncertainty. In Advances in Neural Information Processing Systems, 10220-10230.
[2] Lakkaraju, H.; Arsov, N.; and Bastani, O. 2020. Robust Black Box Explanations Under Distribution Shift. International Conference on Machine Learning (ICML).

confidence: 3

justification: While the paper attempts to understand the impact of different components of the XAI (perturbation-based methods) pipeline, the main contributions and insights are not very clear in this version. The paper needs significant updates before being accepted. Hence, I recommend rejection.

final_rebuttal_confidence: 3

final_rebuttal_justification: While the authors have addressed several of my concerns, I believe that the paper needs to be significantly updated to better clarify the motivation and practicality of the ablation study. Hence I am maintaining my original rating.
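For readers unfamiliar with the pipeline this review refers to, the following is a simplified, illustrative sketch of RISE-style pixel attribution with smooth (bilinearly upsampled) masks. The original method additionally shifts and crops the upsampled masks; the `model` callable and hyperparameters here are placeholders, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import zoom

def rise_attribution(image, model, n_masks=2000, grid=7, p_keep=0.5, rng=None):
    """Sketch of RISE-style pixel attribution.

    `image` is an HxWxC float array; `model` maps such an image to a scalar class score.
    """
    rng = np.random.default_rng(rng)
    H, W = image.shape[:2]
    saliency = np.zeros((H, W), dtype=np.float64)
    for _ in range(n_masks):
        # Low-resolution binary mask, upsampled with bilinear interpolation so that
        # occlusion edges are smooth rather than blocky.
        coarse = (rng.random((grid, grid)) < p_keep).astype(np.float64)
        mask = zoom(coarse, (H / grid, W / grid), order=1)[:H, :W]
        score = model(image * mask[..., None])   # occlude by elementwise multiplication
        saliency += score * mask                 # credit kept pixels with the score
    return saliency / (n_masks * p_keep)         # normalize by expected mask coverage
```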
INmPMGFlwR
Smooth-edged Perturbations Improve Perturbation-based Image Explanations
[]
Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models by perturbing parts of the input to measure how those parts affect the output. Due to the intractability of perturbing each pixel individually, images are typically attributed to larger segments. The Randomized Input Sampling for Explanations (RISE) method solved this issue by using smooth perturbation masks. While this method has proven effective and popular, it has not been investigated which parts of the method are responsible for its success. This work tests many combinations of mask sampling, segmentation techniques, smoothing, and attribution calculation. The results show that the RISE-style pixel attribution is beneficial to all evaluated methods. Furthermore, it is shown that attribution calculation is the least impactful parameter. The code and data gathered in this work are available online: Removed for anonymization.
[ "XAI", "post-hoc", "model-agnostic", "perturbation-based", "occlusion", "computer vision" ]
https://openreview.net/pdf?id=INmPMGFlwR
https://openreview.net/forum?id=INmPMGFlwR
qIUbTKNATv
official_review
1,728,418,952,531
INmPMGFlwR
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission24/Reviewer_VRaE" ]
NLDL.org/2025/Conference
2025
title: Concerns on evaluation robustness and connection to human interpretability

summary: The submitted paper explores perturbation-based post-hoc image explanation methods used in deep learning, focusing on explaining model predictions by occluding parts of an input image and measuring the impact on the model's output. The paper builds on existing techniques such as Randomized Input Sampling for Explanations (RISE) by evaluating the effectiveness of smooth perturbation masks across different explanation pipelines. Its key contribution is an evaluation of how smooth perturbation masks perform across different frameworks, testing combinations of segmentation, perturbation, sampling, attribution, and models. The authors analyze how each element in the pipeline influences explanation quality, using Symmetric Relevance Gain (SRG) as the primary evaluation metric.

strengths:
1. The authors perform a detailed evaluation of various parameters involved in the perturbation-based explanation pipeline. By testing combinations of segmentation techniques, sampling methods, and attribution calculation approaches, the paper offers a thorough empirical analysis. This level of experimentation and comparison provides insights into which parts of the pipeline contribute most to performance, contributing to a deeper understanding of these techniques.
2. The methods are evaluated on multiple well-established CNN architectures (AlexNet, VGG-16, ResNet-50), which ensures that the proposed techniques are broadly applicable across different model types.
3. Code and data are available, enabling reproducibility.

weaknesses: I have concerns regarding the evaluation, particularly with the use of the term "interpretability." The evaluation appears to focus more on the model's sensitivity to input perturbations than on providing a true measure of interpretability. To support claims of interpretability, it would be valuable to assess whether the explanations generated by the model are understandable and align with human intuition, potentially by involving domain experts in the evaluation process. Given the cost of involving humans, an alternative could be to conduct experiments using synthetic data in controlled environments. With synthetic data, the ground truth of which features should influence a prediction is known, allowing for a more rigorous comparison of the method's explanations against these known features. Here are my main questions and concerns:
1. The act of occluding parts of an image introduces perturbations that could affect the model in ways that aren't directly related to how it normally processes the image, potentially leading to misleading assessments of importance. The way pixels or regions are occluded may introduce unintended biases, making it harder to assess the true impact of the occluded regions on the prediction. It would be beneficial for the paper to provide further elaboration on how these biases are mitigated or accounted for.
2. Based on the methods described, it appears that the occlusion metric primarily measures the sensitivity of the model to the removal of image regions rather than its true explainability. Sensitivity reflects how model outputs change when portions of an image are occluded, but this does not necessarily correspond to human-intuitive explanations. For example, a model might be highly sensitive to patterns in an image's background, which, while influential to the model, might not offer a meaningful explanation to a human observer regarding why a particular prediction (e.g., identifying a "dog") was made. I ask for further clarification on whether the identified regions are inherently meaningful or interpretable for humans. This would strengthen the analysis.
3. How does this relate to causal inference, spurious correlations, etc.? A model might be overly sensitive to spurious features (e.g., noise or irrelevant patterns), which could lead to explanations that highlight regions that don't align with meaningful human explanations. How does the approach ensure that it avoids emphasizing these spurious features?
4. The paper sets occluded values to the mean pixel value of the image. However, it would be important to evaluate the sensitivity of the results to other occlusion techniques, such as using the median value, random values, or dataset-wide mean values. Is there a reason for not doing this? To me, it seems that an exploration of how different occlusion strategies affect the explanation would enhance the robustness of the findings.
5. The use of only 1,000 validation images of ImageNet is not sufficiently justified, nor is the decision to use the validation set instead of the test set. The authors should have justified why this sample size is representative and how it was selected.
6. Evaluating the reliability of the explanation methods is complicated by the lack of ground truth, which would require complete transparency into the model's decision-making process -- something these explanation techniques are themselves trying to provide. This circular issue should be discussed, and suggestions for addressing it in future work would be appreciated.
7. A comparison with gradient-based post-hoc explanation methods would provide valuable context. Gradient-based methods are well established in the explainability literature, and a comparison would allow for a more comprehensive evaluation of the proposed method's performance.
8. It would benefit the clarity of the paper if a formal problem definition were introduced earlier, following the introduction. Additionally, the optimization of different methods could be more broadly contextualized to outline their specific goals and constraints within the broader field of explainability research.

confidence: 3

justification: While the paper makes a meaningful contribution by exploring perturbation-based explanations in detail, its claims of interpretability are weakened by a narrow evaluation framework that primarily measures sensitivity. To strengthen the paper, the authors could explore alternative evaluation metrics:
* insertion and deletion tests to assess the faithfulness of the explanations
* counterfactual explanations to provide causal insights into the model's decision-making process
* human-centric metrics

Further, I ask the authors to justify their experimental choices more clearly. These additions would allow for a more comprehensive understanding of the proposed methods and their impact on explainability and interpretability. Moreover, the paper uses only 1,000 validation images from ImageNet without sufficient justification for this sample size. This choice raises concerns about whether the results generalize to a broader dataset, and the decision to use the validation set instead of the test set lacks proper explanation. A more detailed discussion of how this sample size was selected and its representativeness would be beneficial.
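Point 4 of this review asks how different occlusion fill strategies affect the results. A small illustrative helper along the following lines could be used to compare them; the strategy names and the per-image-mean default are assumptions for illustration, not the paper's code.

```python
import numpy as np

def occlude(image, mask, strategy="image_mean", rng=None, dataset_mean=None):
    """Replace masked-out pixels using different fill strategies.

    `image` is HxWxC, `mask` is a boolean HxW array where True marks pixels to occlude.
    """
    rng = np.random.default_rng(rng)
    out = image.astype(np.float64).copy()
    flat = image.reshape(-1, image.shape[-1])
    if strategy == "image_mean":
        fill = flat.mean(axis=0)                       # per-image channel means
    elif strategy == "image_median":
        fill = np.median(flat, axis=0)                 # per-image channel medians
    elif strategy == "dataset_mean":
        fill = dataset_mean                            # e.g. precomputed dataset channel means
    elif strategy == "random":
        fill = rng.uniform(0, 255, size=(int(mask.sum()), image.shape[-1]))
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    out[mask] = fill
    return out
```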
INmPMGFlwR
Smooth-edged Perturbations Improve Perturbation-based Image Explanations
[]
Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models by perturbing parts of the input to measure how those parts affect the output. Due to the intractability of perturbing each pixel individually, images are typically attributed to larger segments. The Randomized Input Sampling for Explanations (RISE) method solved this issue by using smooth perturbation masks. While this method has proven effective and popular, it has not been investigated which parts of the method are responsible for its success. This work tests many combinations of mask sampling, segmentation techniques, smoothing, and attribution calculation. The results show that the RISE-style pixel attribution is beneficial to all evaluated methods. Furthermore, it is shown that attribution calculation is the least impactful parameter. The code and data gathered in this work are available online: Removed for anonymization.
[ "XAI", "post-hoc", "model-agnostic", "perturbation-based", "occlusion", "computer vision" ]
https://openreview.net/pdf?id=INmPMGFlwR
https://openreview.net/forum?id=INmPMGFlwR
ebzAcerE6U
official_review
1,727,345,046,379
INmPMGFlwR
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission24/Reviewer_WvMh" ]
NLDL.org/2025/Conference
2025
title: Smooth-edged Perturbations Improve Perturbation-based Image Explanations

summary: Based on the success of RISE (Randomized Input Sampling for Explanations) in the field of XAI, this paper studies a range of combinations of perspectives, including sampling, segmentation, and XAI methods, which the authors call the attribution calculations. It shows that the RISE-based pixel-based methods are effective, in particular when the sample size is large. Although it focuses on some specific image datasets and CNN models, this paper reveals the dependency of SRG values on the sampling methods (random/entropic) regardless of the attribution.

strengths: In this paper, the authors compare a variety of the aforementioned combinations. The perspectives in their experiments are clearly described and the results are exhibited. Not only is this interesting, it is also important to examine how well XAI methods work when the possible combinations of methods and attributions are changed.

weaknesses: Although this study works on an important topic, the conclusion (discussion) section is not so clear, in the sense that it is not clear what has been achieved relative to the original purpose. For example, in Table 3, LIME seems to exhibit the best SRG value of 25.8 with 4000/8000 random sampling. However, it is not clear whether this is sufficient when we recall the assertion of the abstract stating "the RISE-style pixel attribution is beneficial", because we do not know how to compare this result to the performance of the original LIME. The authors explain the definition of the SRG value right before Section 3, so I think it would be better if the value considered sufficient for an explanation were defined around there. Moreover, the phrase "RISE-style pixel attribution" sounds unclear. I think it would be better if this part were re-phrased in a more general and easier-to-understand way. A question to the authors: could you elaborate on the relationship between Tables 2 and 3? For example, I cannot find the sampling method used in Table 2.

confidence: 4

justification: Although the authors conducted experiments comparing extensive combinations, my major concern is that the assertion of the paper is not clear. They assert that "the RISE-style pixel attribution is beneficial to all evaluated methods" in the abstract, but this is not clear from the results, mainly Tables 2 and 3, since they did not compare the original performance with that of the proposed method. Tables 2 and 3 may give the impression that the original RISE is almost sufficient. Therefore, in order to improve this paper, it seems better to compare the original methods with those proposed in this paper. Second, the authors should explain why the SRG value is a good evaluation measure for XAI methods. Third, it would be better if the authors employed statistical hypothesis testing when they conclude something; for example, in line 349, they say "improves performance", but this sounds qualitative without discussing statistical significance. The same applies to lines 363--364. The results exhibited here are limited to image datasets, so in Section 4 it would be better to discuss extending this approach to other types of datasets. In machine learning, the no-free-lunch theorem tells us that no specific model is always effective. This might also apply to the field of XAI, which implies the possibility of different results depending on the type of dataset. The conditions of Tables 2 and 3 should be clarified. For example, in Table 2, I could not find the sampling method. Finally, if possible, the authors should polish the English in the main text. For instance, "implementation of and data" in the abstract should be "implementation of data", and "is available online" should be "are available online". There are similar issues in the main text. To sum up, I think this paper falls below the border of acceptance.

final_rebuttal_confidence: 3

final_rebuttal_justification: I found that some of my concerns may be resolved. For example, the authors say that they have already conducted hypothesis tests to quantitatively support their assertions and that they will make the assertions clearer and check the English writing. These would add to the strengths of this paper. However, some of my concerns still remain. (i) They did not clarify the experimental conditions of Table 2 (sampling method) that were raised in my comment. (ii) They say that increasing the datasets is difficult. Although they say that the evaluation with 2% of the dataset was not significant (p<0.05) in experiments with limited data, I think it would then be better to describe the details of this part and to derive some statistical indicators, such as confidence intervals, which would help support their assertion. Based on these, I think I am better off retaining my original rating (2).
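The reviewer's request for hypothesis testing and confidence intervals could be addressed along these lines. The per-image SRG scores below are synthetic placeholders, purely to illustrate the paired test and bootstrap interval, not results from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-image SRG scores for two attribution configurations.
rng = np.random.default_rng(0)
srg_method_a = rng.normal(25.0, 5.0, size=1000)
srg_method_b = srg_method_a + rng.normal(0.5, 3.0, size=1000)

# Paired non-parametric test of the claim that configuration B improves over A.
stat, p_value = wilcoxon(srg_method_b, srg_method_a, alternative="greater")
print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")

# Bootstrap 95% confidence interval for the mean paired SRG gain.
diffs = srg_method_b - srg_method_a
boot = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                 for _ in range(5000)])
print("95% CI for mean SRG gain:", np.percentile(boot, [2.5, 97.5]))
```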
INmPMGFlwR
Smooth-edged Perturbations Improve Perturbation-based Image Explanations
[]
Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models by perturbing parts of the input to measure how those parts affect the output. Due to the intractability of perturbing each pixel individually, images are typically attributed to larger segments. The Randomized Input Sampling for Explanations (RISE) method solved this issue by using smooth perturbation masks. While this method has proven effective and popular, it has not been investigated which parts of the method are responsible for its success. This work tests many combinations of mask sampling, segmentation techniques, smoothing, and attribution calculation. The results show that the RISE-style pixel attribution is beneficial to all evaluated methods. Furthermore, it is shown that attribution calculation is the least impactful parameter. The code and data gathered in this work are available online: Removed for anonymization.
[ "XAI", "post-hoc", "model-agnostic", "perturbation-based", "occlusion", "computer vision" ]
https://openreview.net/pdf?id=INmPMGFlwR
https://openreview.net/forum?id=INmPMGFlwR
B4axRvNSPT
decision
1,730,901,555,492
INmPMGFlwR
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
INmPMGFlwR
Smooth-edged Perturbations Improve Perturbation-based Image Explanations
[]
Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models by perturbing parts of the input to measure how those parts affect the output. Due to the intractability of perturbing each pixel individually, images are typically attributed to larger segments. The Randomized Input Sampling for Explanations (RISE) method solved this issue by using smooth perturbation masks. While this method has proven effective and popular, it has not been investigated which parts of the method are responsible for its success. This work tests many combinations of mask sampling, segmentation techniques, smoothing, and attribution calculation. The results show that the RISE-style pixel attribution is beneficial to all evaluated methods. Furthermore, it is shown that attribution calculation is the least impactful parameter. The code and data gathered in this work are available online: Removed for anonymization.
[ "XAI", "post-hoc", "model-agnostic", "perturbation-based", "occlusion", "computer vision" ]
https://openreview.net/pdf?id=INmPMGFlwR
https://openreview.net/forum?id=INmPMGFlwR
4UbX260G96
meta_review
1,730,556,590,883
INmPMGFlwR
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission24/Area_Chair_ERJW" ]
NLDL.org/2025/Conference
2025
metareview: The paper proposes an analysis of perturbation-based image explanations by testing multiple combinations of the different building blocks of the pipeline (segmentation, perturbation, etc.). Both the approach and most of the experimental design choices are sensible. All the reviewers agree on the relevance of the topic and offered constructive advice on the approach and its evaluation. As a suggestion, it would be good to discuss the scope and limitations earlier on in the paper to help the reader understand and narrow down the different choices that come later with the experimental design. However, even though the authors managed to address many of the issues raised by the reviewers, there are some that remain after the rebuttal phase. The main one is around clarity (pointed out by the reviewers but also acknowledged by the authors, and particularly important for the discussion in the paper). The manuscript and its presentation could use further work, and this is an aspect that would greatly improve the paper if properly tackled. The authors hinted at a few possible ways to change the manuscript, but it is not fully clear what a final version of the paper would look like, which makes the current assessment difficult. Additionally, there are some other comments that were not addressed in full (e.g., lack of details around Table 2, use of synthetic data for evaluation) or not to the full satisfaction of the reviewers (size/choice of the dataset or statistical significance).
recommendation: Reject
suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up
confidence: 4: The area chair is confident but not absolutely certain
CGkyjTXomz
Exploring Segment Anything Foundation Models for Out of Domain Crevasse Drone Image Segmentation
[ "Steven Wallace", "Aiden Durrant", "William David Harcourt", "Richard Hann", "Georgios Leontidis" ]
In this paper, we explore the application of Segment Anything (SAM) foundation models for segmenting crevasses in Uncrewed Aerial Vehicle (UAV) images of glaciers. We evaluate the performance of the SAM and SAM 2 models on ten high-resolution UAV images from Svalbard, Norway. Each SAM model has been evaluated in inference mode without additional fine-tuning. Using both automated and manual prompting methods, we compare the segmentation quantitatively using Dice Score Coefficient (DSC) and Intersection over Union (IoU) metrics. Results show that the SAM 2 Hiera-L model outperforms other variants, achieving average DSC and IoU scores of 0.43 and 0.28 respectively with automated prompting. However, the overall off-the-shelf performance suggests that further improvements are still required to enable glaciologists to examine crevasse patterns and associated physical processes (e.g. iceberg calving), indicating the need for further fine-tuning to address domain shift challenges. Our results highlight the potential of segmentation foundation models for specialised remote sensing applications while also identifying limitations in applying them to high-resolution UAV images, as well as ways to enhance further model performance on out-of-domain glacier imagery, such as few-shot and weakly supervised learning techniques.
[ "Deep Learning", "Foundation Models", "Image Segmentation", "Climate Science", "Remote Sensing" ]
https://openreview.net/pdf?id=CGkyjTXomz
https://openreview.net/forum?id=CGkyjTXomz
oCO5b39Z62
meta_review
1,730,392,044,829
CGkyjTXomz
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission25/Area_Chair_ek4J" ]
NLDL.org/2025/Conference
2025
metareview: Overall, the authors have successfully improved the paper by incorporating the reviewers' feedback, enhancing its clarity for the reader. Based on the reviewers' comments on the authors' rebuttal and my own assessment of the paper, I recommend accepting this paper as a poster presentation. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 3: The area chair is somewhat confident
CGkyjTXomz
Exploring Segment Anything Foundation Models for Out of Domain Crevasse Drone Image Segmentation
[ "Steven Wallace", "Aiden Durrant", "William David Harcourt", "Richard Hann", "Georgios Leontidis" ]
In this paper, we explore the application of Segment Anything (SAM) foundation models for segmenting crevasses in Uncrewed Aerial Vehicle (UAV) images of glaciers. We evaluate the performance of the SAM and SAM 2 models on ten high-resolution UAV images from Svalbard, Norway. Each SAM model has been evaluated in inference mode without additional fine-tuning. Using both automated and manual prompting methods, we compare the segmentation quantitatively using Dice Score Coefficient (DSC) and Intersection over Union (IoU) metrics. Results show that the SAM 2 Hiera-L model outperforms other variants, achieving average DSC and IoU scores of 0.43 and 0.28 respectively with automated prompting. However, the overall off-the-shelf performance suggests that further improvements are still required to enable glaciologists to examine crevasse patterns and associated physical processes (e.g. iceberg calving), indicating the need for further fine-tuning to address domain shift challenges. Our results highlight the potential of segmentation foundation models for specialised remote sensing applications while also identifying limitations in applying them to high-resolution UAV images, as well as ways to enhance further model performance on out-of-domain glacier imagery, such as few-shot and weakly supervised learning techniques.
[ "Deep Learning", "Foundation Models", "Image Segmentation", "Climate Science", "Remote Sensing" ]
https://openreview.net/pdf?id=CGkyjTXomz
https://openreview.net/forum?id=CGkyjTXomz
P16QqMqoKB
official_review
1,727,263,901,033
CGkyjTXomz
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission25/Reviewer_YfWd" ]
NLDL.org/2025/Conference
2025
title: Application of Segment Anything (foundation) models for detecting crevasses in UAV imagery

summary: The paper aims to quantify the generalization ability of the foundation models known as SAM (Segment Anything Model) and SAM 2 on UAV imagery of Arctic glaciers for the purpose of detecting crevasses. Without any baselines, the paper compares several variants of SAM and SAM 2 and employs the Dice score (DSC) and intersection-over-union (IoU) as performance metrics. The paper concludes that the best-performing model is not good enough for downstream applications and fine-tuning is required.

strengths: The paper addresses an important and well-motivated problem. Overall, the paper is well-written, well-structured, and easy to follow.

weaknesses: The only novelty of the paper is the application of SAM and SAM 2 to UAV data, but the paper provides very few details about the applied methodology. Despite the use of off-the-shelf models, it seems like it would be very hard to reproduce the results. There are no details about how each of the models, prompts, etc. is configured. Moreover, the evaluation of the model is also quite weak. First of all, there are no baseline models included as a reference. If the authors had provided performance metrics for simple baselines, such as predicting the majority class or random guessing, or for a stronger baseline such as a trained U-Net (even for the small dataset they have), it would help put the reported performance metrics into perspective. The simple baselines are really relevant since several of the models seem to perform very poorly (i.e., IoU as low as 0.03). The choice of metrics also seems a bit odd. The paper uses IoU and DSC, which essentially quantify the same thing (you can write one as a function of the other). The similarity of the metrics is also very evident from Figure 3. It would have been much more informative to include other metrics or to use the space for elaborating on the methodology. Finally, it is not clear to me what the purpose of the analysis in Figure 3 is. Minor details: The abbreviation 'DSC' does not seem to be defined, but I assume it is the Dice score.

confidence: 3

justification: Since the sole contribution of the paper is the application and evaluation of SAM models on a new type of data, it is problematic that the methodology is not carefully described and that the evaluation is weak (no baseline methods included, and the use of IoU and DSC, which are heavily correlated).

final_rebuttal_confidence: 4

final_rebuttal_justification: Thank you for the rebuttal. However, I am maintaining my score because of the weak experimental evaluation: my concerns about basics remain, as proper metrics, relevant baselines, etc. must be included to help put the results in perspective, and therefore baselines cannot be left for future work.
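The reviewer's remark that DSC and IoU quantify the same thing can be made precise: for a predicted mask $A$ and ground-truth mask $B$, each score is a monotone transform of the other, so ranking models by one also ranks them by the other. The identity holds per image; averages over images satisfy it only approximately.

```latex
\mathrm{DSC}(A,B) = \frac{2\,|A \cap B|}{|A| + |B|}, \qquad
\mathrm{IoU}(A,B) = \frac{|A \cap B|}{|A \cup B|}, \qquad
\mathrm{DSC} = \frac{2\,\mathrm{IoU}}{1 + \mathrm{IoU}}, \qquad
\mathrm{IoU} = \frac{\mathrm{DSC}}{2 - \mathrm{DSC}}.
```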
CGkyjTXomz
Exploring Segment Anything Foundation Models for Out of Domain Crevasse Drone Image Segmentation
[ "Steven Wallace", "Aiden Durrant", "William David Harcourt", "Richard Hann", "Georgios Leontidis" ]
In this paper, we explore the application of Segment Anything (SAM) foundation models for segmenting crevasses in Uncrewed Aerial Vehicle (UAV) images of glaciers. We evaluate the performance of the SAM and SAM 2 models on ten high-resolution UAV images from Svalbard, Norway. Each SAM model has been evaluated in inference mode without additional fine-tuning. Using both automated and manual prompting methods, we compare the segmentation quantitatively using Dice Score Coefficient (DSC) and Intersection over Union (IoU) metrics. Results show that the SAM 2 Hiera-L model outperforms other variants, achieving average DSC and IoU scores of 0.43 and 0.28 respectively with automated prompting. However, the overall off-the-shelf performance suggests that further improvements are still required to enable glaciologists to examine crevasse patterns and associated physical processes (e.g. iceberg calving), indicating the need for further fine-tuning to address domain shift challenges. Our results highlight the potential of segmentation foundation models for specialised remote sensing applications while also identifying limitations in applying them to high-resolution UAV images, as well as ways to enhance further model performance on out-of-domain glacier imagery, such as few-shot and weakly supervised learning techniques.
[ "Deep Learning", "Foundation Models", "Image Segmentation", "Climate Science", "Remote Sensing" ]
https://openreview.net/pdf?id=CGkyjTXomz
https://openreview.net/forum?id=CGkyjTXomz
OpIXLzMZiE
official_review
1,727,874,654,857
CGkyjTXomz
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission25/Reviewer_LSff" ]
NLDL.org/2025/Conference
2025
title: A well-written manuscript

summary: This paper studies drone images of crevasses in the Arctic, which is an important but less explored domain. The authors apply SAM and SAM 2 models to segment crevasses and qualitatively evaluate the results, which suggests that further fine-tuning is still essential for using machine learning models on the crevasse segmentation task.

strengths:
- The problem of segmenting crevasses in the Arctic is an important issue by itself.
- The authors take advantage of the most advanced segmentation foundation models.
- The paper is overall well written and easy to follow. The authors summarize related research in a clear way. The evaluation results are also clearly presented.

weaknesses:
- It seems like the full power of the SAM models is not fully elicited.
- According to Section 4, the authors use SAM models to conduct inference in two settings: 1. grid-point prompts and 2. two single-point prompts for background and foreground.
- According to Figure 2, grid points for the images may result in points being located in both the crevasse region and the non-crevasse region, which will make the segmentation results worse.
- Considering the image resolution and the distribution of crevasses in the images, it would be necessary to adjust the sparsity of the grid-point prompts.
- For Figure 2, a description of why the resolution 1024x1024 was chosen is missing. The ground-truth crevasses have quite complex shapes at the current resolution. Considering that the SA-1B and SA-V datasets are mostly about common objects, decreasing the drone image resolution so that crevasses have simpler shapes may help.
- (Please forgive me for lacking relevant knowledge.) Five images seem like too few for a valid evaluation. Difficulties in collecting data (such as cost and time span) are not described in the manuscript.
- Further discussion on using SAM models to relieve the annotation burden could be added.

confidence: 3

justification: Lacking background knowledge of the glacier field, I give an assessment from the machine learning point of view.
- The dataset size is too small.
- The effectiveness of the foundation models is not thoroughly examined. There is potential for improvement on both the data side and the model side.

final_rebuttal_confidence: 3

final_rebuttal_justification: My concerns about the experimental settings are resolved, and the size of the dataset is not a main issue due to the difficulty of collecting these data. The quantitative results are also rather inconclusive. However, the utilization of the SAM models is insufficient, and the page limit is not a relevant reason. I encourage the authors to explore and report more details on the results when adjusting the models, including image sizes, image conditions (such as the density of crevasses shown in the image), the number of prompt points, the format of prompts (such as words or bounding boxes), and internal parameters of the models (such as thresholds).
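For reference, the two prompting modes this review describes (automatic grid prompting and manual foreground/background point prompts) could look roughly like the following with the public segment-anything package. The checkpoint path, blank image placeholder, and point coordinates are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor

# Placeholder checkpoint path; "vit_h" is one of the published SAM backbones.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")

# Placeholder for a 1024x1024 RGB UAV image tile (uint8, HxWx3).
image = np.zeros((1024, 1024, 3), dtype=np.uint8)

# 1) Automated prompting: a regular grid of point prompts over the image.
mask_generator = SamAutomaticMaskGenerator(sam, points_per_side=32)
auto_masks = mask_generator.generate(image)  # list of dicts with a boolean "segmentation" mask

# 2) Manual prompting: one foreground point (label 1) and one background point (label 0).
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[400, 300], [50, 50]]),  # hypothetical (x, y) locations
    point_labels=np.array([1, 0]),
    multimask_output=True,  # return several candidate masks with confidence scores
)
```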
CGkyjTXomz
Exploring Segment Anything Foundation Models for Out of Domain Crevasse Drone Image Segmentation
[ "Steven Wallace", "Aiden Durrant", "William David Harcourt", "Richard Hann", "Georgios Leontidis" ]
In this paper, we explore the application of Segment Anything (SAM) foundation models for segmenting crevasses in Uncrewed Aerial Vehicle (UAV) images of glaciers. We evaluate the performance of the SAM and SAM 2 models on ten high-resolution UAV images from Svalbard, Norway. Each SAM model has been evaluated in inference mode without additional fine-tuning. Using both automated and manual prompting methods, we compare the segmentation quantitatively using Dice Score Coefficient (DSC) and Intersection over Union (IoU) metrics. Results show that the SAM 2 Hiera-L model outperforms other variants, achieving average DSC and IoU scores of 0.43 and 0.28 respectively with automated prompting. However, the overall off-the-shelf performance suggests that further improvements are still required to enable glaciologists to examine crevasse patterns and associated physical processes (e.g. iceberg calving), indicating the need for further fine-tuning to address domain shift challenges. Our results highlight the potential of segmentation foundation models for specialised remote sensing applications while also identifying limitations in applying them to high-resolution UAV images, as well as ways to enhance further model performance on out-of-domain glacier imagery, such as few-shot and weakly supervised learning techniques.
[ "Deep Learning", "Foundation Models", "Image Segmentation", "Climate Science", "Remote Sensing" ]
https://openreview.net/pdf?id=CGkyjTXomz
https://openreview.net/forum?id=CGkyjTXomz
O1hGKNhjgg
official_review
1,728,472,181,490
CGkyjTXomz
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission25/Reviewer_fd4r" ]
NLDL.org/2025/Conference
2025
title: Easy-to-follow paper on segmentation in the glaciology domain

summary: The authors describe testing pre-trained models on crevasse segmentation in the glaciology domain using UAV images of glaciers. One expert glaciologist visually evaluated segmentation results from seven models for the automatic mask generator prompt experiments. The authors report that the SAM 2 Hiera-L model performs best because it has been trained on the largest pre-training dataset and has the largest model architecture of the SAM 2 models. The authors argue that fine-tuning is required because of how far out of domain the UAV images are from the images in the SA-1B and SA-V training datasets used for the SAM and SAM 2 foundation models.

strengths: The paper is easy to follow, and I personally like the evaluation details presented as graphs. The paper could have a significant impact on glaciology.

weaknesses:
1. I would suggest spending more time on writing the introduction and motivation with many more appropriate references, since a few strong statements lack references.
2. How does the problem of segmenting crevasses relate to broader segmentation challenges in remote sensing?
3. I believe the authors lack a discussion of what specific attributes of UAV images make them significantly different from the training datasets.
4. The authors did not investigate (or report) whether conditions (e.g., lighting, glacier surface conditions, weather) affect false positive or false negative rates. Could additional pre-processing steps mitigate these errors?
5. Could the insights gained from other studies be applicable to this domain (e.g., agriculture, urban planning, etc.)?
6. How would the authors justify the balance between automated and manual prompting?

confidence: 4

justification: I judge the paper as worthy of being shared with the community because it addresses issues in the glaciology domain, which might be under-represented in papers at similar conferences. I would love to see more abstract discussion stepping away from the statistics and models applied. As with all data processing and analysis work, I wish the authors would spend more time talking about the data and show a deeper understanding of the data from this domain. Perhaps access to the expert glaciologist could be a good start.
CGkyjTXomz
Exploring Segment Anything Foundation Models for Out of Domain Crevasse Drone Image Segmentation
[ "Steven Wallace", "Aiden Durrant", "William David Harcourt", "Richard Hann", "Georgios Leontidis" ]
In this paper, we explore the application of Segment Anything (SAM) foundation models for segmenting crevasses in Uncrewed Aerial Vehicle (UAV) images of glaciers. We evaluate the performance of the SAM and SAM 2 models on ten high-resolution UAV images from Svalbard, Norway. Each SAM model has been evaluated in inference mode without additional fine-tuning. Using both automated and manual prompting methods, we compare the segmentation quantitatively using Dice Score Coefficient (DSC) and Intersection over Union (IoU) metrics. Results show that the SAM 2 Hiera-L model outperforms other variants, achieving average DSC and IoU scores of 0.43 and 0.28 respectively with automated prompting. However, the overall off-the-shelf performance suggests that further improvements are still required to enable glaciologists to examine crevasse patterns and associated physical processes (e.g. iceberg calving), indicating the need for further fine-tuning to address domain shift challenges. Our results highlight the potential of segmentation foundation models for specialised remote sensing applications while also identifying limitations in applying them to high-resolution UAV images, as well as ways to enhance further model performance on out-of-domain glacier imagery, such as few-shot and weakly supervised learning techniques.
[ "Deep Learning", "Foundation Models", "Image Segmentation", "Climate Science", "Remote Sensing" ]
https://openreview.net/pdf?id=CGkyjTXomz
https://openreview.net/forum?id=CGkyjTXomz
CNy7kiW8UZ
official_review
1,727,466,385,647
CGkyjTXomz
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission25/Reviewer_X91q" ]
NLDL.org/2025/Conference
2025
title: Good exploration of target problem domain but experimentally weak

summary: The paper describes the problem of segmenting images of glaciers in the Arctic to identify crevasses. Unlike glaciers in the Antarctic, those in the Arctic are challenging to segment because the crevasses are smaller. Segment Anything Model (SAM) 1 & 2 are applied in a zero-shot fashion to the problem. It is found that SAM 1 & 2 are confounded by sediment and areas around the crevasse, and the performance of the approach is inadequate as a replacement for manual labelling. Finally, the paper recommends that some form of fine-tuning is required to tackle the problem to the point that the results of the AI model would be useful.

strengths:
- The paper clearly demonstrates a deep level of understanding of domain-specific glacial mechanics, and weaknesses of the described approach are clearly and concisely described.
- Reflection on the methodology used is clear and detailed.
- Some novelty is shown through an incremental contribution to the problem outlined, improving over a previous approach.
- The problem described is difficult and non-trivial, citing challenges in sourcing labelled data.
- An innovative solution to the problem is presented, making use of the latest advancements in the field of computer vision.
- The paper makes the problem and the proposed solution clear and easy to understand.
- No ethical issues are obviously present.
- The experiments done are thoroughly analysed given the circumstances.
- Sources are clearly referenced for all ideas described.
- No conflicts of interest or undisclosed affiliations could be found, but given the paper is anonymised it is recommended that the chairs analyse the non-anonymised paper for this.

weaknesses:
- No link to source code or a data repository appears to have been provided. It is suggested that all code written is uploaded to a git repository and that the data is made freely available for other researchers to download.
- It is unclear why the proposed future work of implementing a fine-tuning based solution was not carried out by the researchers.
- Limited insight is shown comparing the approach demonstrated to prior models, as no direct comparison between prior approaches and the proposed solution is given for the target dataset.
- The size of the target dataset is extremely limited.
- No thought is given to alternative problem formulations, such as depth estimation, classification, or object detection.
- Limited significance to the deep learning community is demonstrated.
- The multi-mask prediction mode is described on lines 348/349, and an improvement in experimental performance is reported, but no metrics are shown and it is unclear whether the experiments described contain this improvement or not.

confidence: 4

justification: The researchers clearly demonstrate significant understanding of the target problem domain. The problem described is challenging. The paper displays some incremental novelty in improving on a prior approach. However, the paper is experimentally weak. The presented methodology is not directly compared against prior approaches on the target dataset (which is extremely limited in size), even though it appears they are directly comparable. It is also unclear why the researchers did not implement their own suggestion of fine-tuning. Despite this, the paper is clear and accurately summarises prior work done on the problem by others in the field. It characterises the performance of SAM 1 & 2 on the target problem domain well, providing some useful insights to direct future research to solve the issue.
CGkyjTXomz
Exploring Segment Anything Foundation Models for Out of Domain Crevasse Drone Image Segmentation
[ "Steven Wallace", "Aiden Durrant", "William David Harcourt", "Richard Hann", "Georgios Leontidis" ]
In this paper, we explore the application of Segment Anything (SAM) foundation models for segmenting crevasses in Uncrewed Aerial Vehicle (UAV) images of glaciers. We evaluate the performance of the SAM and SAM 2 models on ten high-resolution UAV images from Svalbard, Norway. Each SAM model has been evaluated in inference mode without additional fine-tuning. Using both automated and manual prompting methods, we compare the segmentation quantitatively using Dice Score Coefficient (DSC) and Intersection over Union (IoU) metrics. Results show that the SAM 2 Hiera-L model outperforms other variants, achieving average DSC and IoU scores of 0.43 and 0.28 respectively with automated prompting. However, the overall off-the-shelf performance suggests that further improvements are still required to enable glaciologists to examine crevasse patterns and associated physical processes (e.g. iceberg calving), indicating the need for further fine-tuning to address domain shift challenges. Our results highlight the potential of segmentation foundation models for specialised remote sensing applications while also identifying limitations in applying them to high-resolution UAV images, as well as ways to enhance further model performance on out-of-domain glacier imagery, such as few-shot and weakly supervised learning techniques.
[ "Deep Learning", "Foundation Models", "Image Segmentation", "Climate Science", "Remote Sensing" ]
https://openreview.net/pdf?id=CGkyjTXomz
https://openreview.net/forum?id=CGkyjTXomz
3EpD7MvC1M
decision
1,730,901,555,687
CGkyjTXomz
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We have decided to offer opportunities for oral presentations in the remaining available slots in the NLDL program. Thus, despite the AC's poster recommendation, we recommend an oral presentation in addition to the poster presentation given the AC's and reviewers' recommendations.
Bmk8uJl4hF
Effects of Node Centrality Measures for Classification Tasks using GNNs
[]
This study explores the impact of feature selection, particularly node centrality measures, on road type classification within a road network graph using Graph Neural Networks (GNNs) and traditional machine learning models. By training six models on three distinct feature sets—primary road characteristics (S1), centrality measures (S2), and a combined feature set (S3)—we analyze how different feature representations affect model accuracy in distinguishing road types. The GraphSAGE model using S1 achieved the highest test accuracy (0.89), indicating that primary road characteristics are highly effective for classification, whereas the Random Forest model performed worst on the same set, achieving only 0.17 accuracy. Visualized embeddings from S1 models reveal effective clustering by road type for models like GraphSAGE, particularly for residential and tertiary roads, underscoring the model’s capability to capture nuanced structural relationships. These findings indicate that feature selection, especially the inclusion of relevant node centrality measures, plays a crucial role in enhancing classification, though further improvement may require hybrid models or additional contextual data sources to address limitations in differentiating road types with overlapping attributes.
[ "graph neural networks", "road type prediction", "network embedding", "graph machine learning" ]
https://openreview.net/pdf?id=Bmk8uJl4hF
https://openreview.net/forum?id=Bmk8uJl4hF
x21cjPSPYo
official_review
1,728,700,444,510
Bmk8uJl4hF
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission43/Reviewer_4i7E" ]
NLDL.org/2025/Conference
2025
title: Review summary: This paper analyzes a road network using GCNs. Specifically, the paper uses a road network dataset from the Philippines. The method is as follows: the paper transforms the network into its dual, adds new features, and feeds this feature graph into a GCN to obtain labels for the road type. strengths: The paper works with a less explored dataset and application using graph neural networks. weaknesses: I think the paper has a few weaknesses. 1. Novelty. Beyond applying an existing technique to an existing problem, I am not sure what is new. The paper proposes adding new features based on the network properties, but then, after computing the PCA embedding, decides not to use the new features because better clustering is obtained without them. 2. Lack of baselines and discussion. The paper trains the GCN and obtains an accuracy of around 60%. However, I do not know how to interpret this without other methods to compare against. 3. The writing could also be improved. There are many acronyms that are not defined. Additionally, 12 figures for 5 pages is quite a lot, and I do not think all of them are needed. confidence: 4 justification: The novelty and significance of the paper are quite limited. It is not clear what new insights a reader is supposed to get.
Bmk8uJl4hF
Effects of Node Centrality Measures for Classification Tasks using GNNs
[]
This study explores the impact of feature selection, particularly node centrality measures, on road type classification within a road network graph using Graph Neural Networks (GNNs) and traditional machine learning models. By training six models on three distinct feature sets—primary road characteristics (S1), centrality measures (S2), and a combined feature set (S3)—we analyze how different feature representations affect model accuracy in distinguishing road types. The GraphSAGE model using S1 achieved the highest test accuracy (0.89), indicating that primary road characteristics are highly effective for classification, whereas the Random Forest model performed worst on the same set, achieving only 0.17 accuracy. Visualized embeddings from S1 models reveal effective clustering by road type for models like GraphSAGE, particularly for residential and tertiary roads, underscoring the model’s capability to capture nuanced structural relationships. These findings indicate that feature selection, especially the inclusion of relevant node centrality measures, plays a crucial role in enhancing classification, though further improvement may require hybrid models or additional contextual data sources to address limitations in differentiating road types with overlapping attributes.
[ "graph neural networks", "road type prediction", "network embedding", "graph machine learning" ]
https://openreview.net/pdf?id=Bmk8uJl4hF
https://openreview.net/forum?id=Bmk8uJl4hF
uv5oICqYzD
meta_review
1,730,111,331,273
Bmk8uJl4hF
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission43/Area_Chair_V6qX" ]
NLDL.org/2025/Conference
2025
metareview: The initial version of this paper proposed to use a GCN to classify road types using a particular dataset from the Philippines. As pointed out by the reviewers, the paper suffers from a number of issues, such as:
- questionable novelty: while the particular task studied in the paper (road type classification) is not commonly found in the literature, there are a number of relevant papers/techniques pointed out by the reviewers that have not been taken into account;
- results lacking comparison to a baseline and showing relatively low performance;
- methodology described too vaguely and lacking details;
- poor structure of the paper.
The reviewers were quite unanimous about the lack of novelty and insufficient quality of the initial paper, and did not recommend its acceptance. In addition, the authors did not make the effort of replying to the points raised by the reviewers during the rebuttal. The authors did, however, submit a revised version of the paper that was *entirely* rewritten (a comparison between the two versions reveals the lack of intersection). This raises a number of issues, as the submitted reviews become less relevant and it becomes difficult to see whether the observations were taken into account. Concerning the revised version of the paper, after getting feedback from the reviewers:
- The overall structure of the paper has been improved.
- Some comparison with existing relevant literature for the considered task is still missing. The results lack a reference to a baseline from the literature applied to the same dataset.
- The authors spend a lot of time detailing classic GNN constructions and explaining ChebNet details (which are well known). The fact that they use the dual graph representation should have been detailed more, as this is a crucial point.
- Eq. 3 shows they are introducing self-loops (albeit without justification), and IMO it is not correct, as D should be the degree matrix of A+I.
- The new results presented are strange: the authors train several models using road features (S1), centrality measures only (S2), or both road features and centrality (S3). While it could be understandable to see a performance difference between S1 and S2, the performance of S3 should be at least as good as that of S1. However, Table 2 shows that there are multiple cases (D2, D3, D4, B3) where the performance of S3 is lower than S1. This suggests that there might be experimental issues. The analysis of the results did not provide any explanation regarding this (while IMO this raises strong concerns about the validity of the experiments).
- Details about the dataset structure (class distribution) that were present in the initial submission have been removed. Since this is not a well-known dataset, these should have been kept in order for the reader to better understand the results.
- Confusion matrices in the appendix are missing results for the S1 version, and the comments by reviewer 5H9U remain valid: "the confusion matrices given in the appendix does not seem to predict some of the classes at all and also exhibit low accuracy (i.e., high degrees of misclassification)".
In conclusion, while the paper has improved, there are still too many open issues with the experiments and results to accept it; I recommend its rejection. recommendation: Reject suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 5: The area chair is absolutely certain
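For context on the Eq. 3 remark in the meta-review above: in the standard GCN formulation of Kipf and Welling, self-loops are added and the normalization uses the degree matrix of A + I. A reference sketch of that propagation rule (not the submission's own equation) is:

```latex
\[
\tilde{A} = A + I, \qquad
\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}, \qquad
H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right)
\]
```

Here $H^{(l)}$ are the node features at layer $l$, $W^{(l)}$ is that layer's weight matrix, and $\sigma$ is a nonlinearity. Normalizing with the degree matrix of $A$ alone instead of $\tilde{D}$ is the inconsistency the area chair points out.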
Bmk8uJl4hF
Effects of Node Centrality Measures for Classification Tasks using GNNs
[]
This study explores the impact of feature selection, particularly node centrality measures, on road type classification within a road network graph using Graph Neural Networks (GNNs) and traditional machine learning models. By training six models on three distinct feature sets—primary road characteristics (S1), centrality measures (S2), and a combined feature set (S3)—we analyze how different feature representations affect model accuracy in distinguishing road types. The GraphSAGE model using S1 achieved the highest test accuracy (0.89), indicating that primary road characteristics are highly effective for classification, whereas the Random Forest model performed worst on the same set, achieving only 0.17 accuracy. Visualized embeddings from S1 models reveal effective clustering by road type for models like GraphSAGE, particularly for residential and tertiary roads, underscoring the model’s capability to capture nuanced structural relationships. These findings indicate that feature selection, especially the inclusion of relevant node centrality measures, plays a crucial role in enhancing classification, though further improvement may require hybrid models or additional contextual data sources to address limitations in differentiating road types with overlapping attributes.
[ "graph neural networks", "road type prediction", "network embedding", "graph machine learning" ]
https://openreview.net/pdf?id=Bmk8uJl4hF
https://openreview.net/forum?id=Bmk8uJl4hF
b4FbdM7ztZ
official_review
1,727,180,967,960
Bmk8uJl4hF
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission43/Reviewer_Bavt" ]
NLDL.org/2025/Conference
2025
title: Not good enough summary: This manuscript studies the impact of node centrality measures on the performance of Graph Convolutional Networks (GCNs) in road-type classification. Various node centrality measures, such as betweenness, PageRank, closeness, and degree centrality, are investigated. The study reports that using only the primary road features (i.e., road length, width, speed limit, and traffic direction) without additional centrality scores significantly improves classification accuracy. strengths: I could not find any strength. weaknesses: There are many weak points in this manuscript. I enumerate some of them below. 1. Grammatical Errors and Poor Writing Quality: The manuscript contains many grammatical errors and poorly written explanations, which make it difficult to follow. This lack of clarity seriously undermines the readability. 2. Unclear Methodological Explanations: The methodology section is inadequately described, and key details necessary for the reader to understand the study's approach are either missing or unclear. In particular, the transformation from the primal graph to the dual graph is not adequately explained. 3. Lack of Explanation about Differences from Previous Work: Since the use of GCNs for road type classification is not new, the authors need to explain the difference between their work and existing studies. However, a clear explanation is not given. 4. Unclear Results and Conclusions: The results are inadequately presented and difficult to interpret. Several problems can be raised. (1) The explanation of how the results were obtained is unclear. (2) The discussion around the findings is vague. For example, it is mentioned that excluding node centrality scores improves accuracy, but the rationale behind this is not well articulated. (3) The experimental results supporting the main argument (that using only the primary road features gives the best classification accuracy) are weak. confidence: 2 justification: As discussed in the Weaknesses section, this manuscript has many flaws, such as poor writing quality, unclear methodology explanation, lack of novelty, and weak experimental results. I am certainly not familiar with the topic treated in the manuscript, but I strongly think this kind of low-quality work should not be accepted.
Bmk8uJl4hF
Effects of Node Centrality Measures for Classification Tasks using GNNs
[]
This study explores the impact of feature selection, particularly node centrality measures, on road type classification within a road network graph using Graph Neural Networks (GNNs) and traditional machine learning models. By training six models on three distinct feature sets—primary road characteristics (S1), centrality measures (S2), and a combined feature set (S3)—we analyze how different feature representations affect model accuracy in distinguishing road types. The GraphSAGE model using S1 achieved the highest test accuracy (0.89), indicating that primary road characteristics are highly effective for classification, whereas the Random Forest model performed worst on the same set, achieving only 0.17 accuracy. Visualized embeddings from S1 models reveal effective clustering by road type for models like GraphSAGE, particularly for residential and tertiary roads, underscoring the model’s capability to capture nuanced structural relationships. These findings indicate that feature selection, especially the inclusion of relevant node centrality measures, plays a crucial role in enhancing classification, though further improvement may require hybrid models or additional contextual data sources to address limitations in differentiating road types with overlapping attributes.
[ "graph neural networks", "road type prediction", "network embedding", "graph machine learning" ]
https://openreview.net/pdf?id=Bmk8uJl4hF
https://openreview.net/forum?id=Bmk8uJl4hF
YSOOhRR8YE
official_review
1,728,417,047,159
Bmk8uJl4hF
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission43/Reviewer_5H9U" ]
NLDL.org/2025/Conference
2025
title: I consider the paper not suited for publication in its current form. summary: The paper explores graph convolution networks for the prediction of road type based on nodal features encompassing various basic properties of the road (such as length and speed limit) as well as graph structure in terms of shared intersections with other road segments, enriched with nodal information such as degree, number of shortest paths traversing the node, etc. The paper finds that the basic features are most important for successful prediction of road type in terms of residential, secondary, tertiary and service types. PCA is further employed on the basic properties and graph-derived features, finding that the basic properties better account for information regarding road type in the PCA subspace. strengths: The idea of using a GCN and the dual graph, exploring road segments as nodes and their intersections as edges, is sound. The approach of enriching the nodal features with graph-derived properties is interesting. The prediction of road type seems underexplored in the literature as opposed to other tasks such as traffic prediction. weaknesses: When predicting road type I would have expected satellite image data to be highly useful based on the GPS coordinates of the roads. This has previously been considered in the context of road quality prediction, see also: Brewer, Ethan, et al. "Predicting road quality using high resolution satellite imagery: A transfer learning approach." Plos one 16.7 (2021): e0253370. Would such information be available that could form highly relevant node information not accounted for by the modeling approach? In fact, I would expect high-resolution satellite image data to predict the road type well, as opposed to relying on basic road features and graph properties. This should be clarified. Whereas road type prediction has been less commonly explored in the literature, GCNs have been widely used for traffic prediction. It is unclear why the authors do not use architectures designed for traffic prediction as a starting point and compare the performance of these modeling procedures to their present approach. See also: Guo, Kan, et al. "Optimized graph convolution recurrent neural network for traffic prediction." IEEE Transactions on Intelligent Transportation Systems 22.2 (2020): 1138-1149. It would also help to clarify what the merits of the present design choices are as opposed to the existing GCNs used in the context of traffic prediction (as opposed to the current context of road type prediction), and to clarify whether the dual graph representation presently used has previously been explored. The results are not very convincing, and the overall accuracy given on page three does not appear better than chance given the imbalance of the classes. This needs to be discussed. Furthermore, the confusion matrices given in the appendix do not seem to predict some of the classes at all and also exhibit low accuracy (i.e., high degrees of misclassification). Furthermore, no error bars are reported and it is thus unclear how prone the results are to fluctuations w.r.t. parameter initialization, data splits, etc. The significance of the results and the strength of the present approach are thus unclear. The paper would also benefit from being restructured such that the methods section details the considered model architectures and their motivations. Furthermore, the paper could improve its presentation as also outlined in the minor comments below.
Minor comments: The introduction currently reads as one very long paragraph and should be broken into sections – this will help the reader. OSM is not explained as an abbreviation – do you mean OpenStreetMap? The sentence “The specific application of GCN in he context of road network modeling in the Philippines is not evident at the time of writing” – if the application is not evident, it is unclear why this study is of interest and importance. Did the authors mean to convey something else with this sentence – that GCN has not been applied previously in this context? NE is not explained as an abbreviation in “a few authors have applied NE…” – do you mean network embeddings? “The raw obtained “ -> “the raw data obtained”. QGIS is not explained; I assume this refers to the type of geographic information system used. confidence: 4 justification: The results are not very convincing and the approach should be better motivated. The paper also needs to be improved in its presentation, and the results should be statistically assessed, at least in terms of error bars. It is also unclear what the benefits of the proposed approach are given that the basic features appear to be best - i.e., how would a simple logistic regression model or a standard feed-forward neural network using the basic features perform? - and is the approach taken in this context meritable? In summary, I consider the paper not ready for publication in its current form.
Bmk8uJl4hF
Effects of Node Centrality Measures for Classification Tasks using GNNs
[]
This study explores the impact of feature selection, particularly node centrality measures, on road type classification within a road network graph using Graph Neural Networks (GNNs) and traditional machine learning models. By training six models on three distinct feature sets—primary road characteristics (S1), centrality measures (S2), and a combined feature set (S3)—we analyze how different feature representations affect model accuracy in distinguishing road types. The GraphSAGE model using S1 achieved the highest test accuracy (0.89), indicating that primary road characteristics are highly effective for classification, whereas the Random Forest model performed worst on the same set, achieving only 0.17 accuracy. Visualized embeddings from S1 models reveal effective clustering by road type for models like GraphSAGE, particularly for residential and tertiary roads, underscoring the model’s capability to capture nuanced structural relationships. These findings indicate that feature selection, especially the inclusion of relevant node centrality measures, plays a crucial role in enhancing classification, though further improvement may require hybrid models or additional contextual data sources to address limitations in differentiating road types with overlapping attributes.
[ "graph neural networks", "road type prediction", "network embedding", "graph machine learning" ]
https://openreview.net/pdf?id=Bmk8uJl4hF
https://openreview.net/forum?id=Bmk8uJl4hF
RgTAQGDSKh
decision
1,730,901,556,410
Bmk8uJl4hF
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
Bmk8uJl4hF
Effects of Node Centrality Measures for Classification Tasks using GNNs
[]
This study explores the impact of feature selection, particularly node centrality measures, on road type classification within a road network graph using Graph Neural Networks (GNNs) and traditional machine learning models. By training six models on three distinct feature sets—primary road characteristics (S1), centrality measures (S2), and a combined feature set (S3)—we analyze how different feature representations affect model accuracy in distinguishing road types. The GraphSAGE model using S1 achieved the highest test accuracy (0.89), indicating that primary road characteristics are highly effective for classification, whereas the Random Forest model performed worst on the same set, achieving only 0.17 accuracy. Visualized embeddings from S1 models reveal effective clustering by road type for models like GraphSAGE, particularly for residential and tertiary roads, underscoring the model’s capability to capture nuanced structural relationships. These findings indicate that feature selection, especially the inclusion of relevant node centrality measures, plays a crucial role in enhancing classification, though further improvement may require hybrid models or additional contextual data sources to address limitations in differentiating road types with overlapping attributes.
[ "graph neural networks", "road type prediction", "network embedding", "graph machine learning" ]
https://openreview.net/pdf?id=Bmk8uJl4hF
https://openreview.net/forum?id=Bmk8uJl4hF
H2yySjR3uY
official_review
1,728,549,096,297
Bmk8uJl4hF
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission43/Reviewer_fUwb" ]
NLDL.org/2025/Conference
2025
title: Official review summary: In this submission, the authors train graph convolutional neural networks to achieve node-level road network classification. A sophisticated feature extraction method is applied to convert road network data to a directed dual multigraph. A GCN is trained on the extracted graph, and the authors quantitatively analyze the impact of different node features (the node importance and the primary node features) on the model performance. strengths: 1. The problem is interesting, and the dataset is unique to my knowledge. 2. Applying graph neural networks to solve this classification task is reasonable. Converting road networks to dual multi-graphs makes sense to me. weaknesses: 1. The writing and the organization of this paper are unsatisfactory. The title does not specify the road network task and does not match well with the main content of the paper. The implementation details and the experimental results are not shown in the main pages, and the motivation for the clustering experiments and the information they reveal are not explained well. Overall, although the paper is short, its organization and writing are confusing. 2. The experimental part is not solid enough. Basically, the GCNs used in the experiments have the same architecture except for some pre-defined offsets. Why not apply more advanced GNNs, like ChebyNet? In addition, I have no idea why the authors add the offsets to the GCNs, especially for model 3 --- adding a constant offset before the softmax is meaningless because the softmax is shift-invariant. Without any competitive baselines, the rationale of this work is not convincing. confidence: 5 justification: As I mentioned above, this submission has many holes in its writing, organization, and experiments. Its technical quality and novelty are not high enough.
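The shift-invariance point in weakness 2 above is easy to verify numerically: adding the same constant to every logit leaves the softmax output unchanged. A minimal check follows; the logits and offset are arbitrary illustrative values.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()             # standard numerical stabilisation
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.0, 2.0, 0.5])
offset = 3.7                     # any constant added to all logits
print(np.allclose(softmax(logits), softmax(logits + offset)))  # True
```

Note that this only holds for a constant offset applied uniformly to all logits; class-dependent offsets do change the softmax output, which is why the purpose of the offsets would need to be justified.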
Bkm9j80WTj
Enhancing Fault Detection in Optical Networks with Conditional Denoising Diffusion Probabilistic Models
[ "Meadhbh Healy", "Thomas Martini Jørgensen" ]
The scarcity of high-quality anomalous data often poses a challenge in establishing effective automated fault detection schemes. This study addresses the issue in the context of fault detection in optical fibers using reflectometry data, where noise can obscure the detection of certain known anomalies. We specifically investigate whether classes containing samples of low quality can be boosted with synthetically generated examples characterized by high signal-to-noise ratio (SNR). Specifically, we employ a conditional Denoising Diffusion Probabilistic Model (cDDPM) to generate synthetic data for such classes. It works by learning the characteristics of high SNRs from anomaly classes that are less frequently affected by significant noise. The boosted dataset is compared with a baseline dataset (without the augmented data) by training an anomaly classifier and measuring the performances on a hold-out dataset populated only with high quality traces for all classes. We observe a significant improved performance (Precision, Recall, and F1 Scores) for the noise affected training classes proving the success of our methods.
[ "Denoising", "Signal Processing", "Anomaly Detection" ]
https://openreview.net/pdf?id=Bkm9j80WTj
https://openreview.net/forum?id=Bkm9j80WTj
kGWPUue67I
decision
1,730,901,554,568
Bkm9j80WTj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Poster) comment: We recommend a poster presentation given the AC and reviewers recommendations.
Bkm9j80WTj
Enhancing Fault Detection in Optical Networks with Conditional Denoising Diffusion Probabilistic Models
[ "Meadhbh Healy", "Thomas Martini Jørgensen" ]
The scarcity of high-quality anomalous data often poses a challenge in establishing effective automated fault detection schemes. This study addresses the issue in the context of fault detection in optical fibers using reflectometry data, where noise can obscure the detection of certain known anomalies. We specifically investigate whether classes containing samples of low quality can be boosted with synthetically generated examples characterized by high signal-to-noise ratio (SNR). Specifically, we employ a conditional Denoising Diffusion Probabilistic Model (cDDPM) to generate synthetic data for such classes. It works by learning the characteristics of high SNRs from anomaly classes that are less frequently affected by significant noise. The boosted dataset is compared with a baseline dataset (without the augmented data) by training an anomaly classifier and measuring the performances on a hold-out dataset populated only with high quality traces for all classes. We observe a significant improved performance (Precision, Recall, and F1 Scores) for the noise affected training classes proving the success of our methods.
[ "Denoising", "Signal Processing", "Anomaly Detection" ]
https://openreview.net/pdf?id=Bkm9j80WTj
https://openreview.net/forum?id=Bkm9j80WTj
i0uk3wbiF5
meta_review
1,730,317,592,327
Bkm9j80WTj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission8/Area_Chair_gHj2" ]
NLDL.org/2025/Conference
2025
metareview: The paper explores the area of fault detection in optical networks using reflectometry data. The authors utilised conditional Denoising Diffusion Probabilistic Models (cDDPM) to generate synthetic data with a high signal-to-noise ratio (SNR) for classes with low-quality samples. They compared the augmented dataset with a baseline dataset by training an anomaly classifier and evaluating its performance on a hold-out dataset with high-quality traces. The domain is rather interesting and the results are promising. The technical novelty is rather limited, though. Nevertheless, the reviewers were positive; therefore, the paper can be accepted and discussed at the conference. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down confidence: 4: The area chair is confident but not absolutely certain
Bkm9j80WTj
Enhancing Fault Detection in Optical Networks with Conditional Denoising Diffusion Probabilistic Models
[ "Meadhbh Healy", "Thomas Martini Jørgensen" ]
The scarcity of high-quality anomalous data often poses a challenge in establishing effective automated fault detection schemes. This study addresses the issue in the context of fault detection in optical fibers using reflectometry data, where noise can obscure the detection of certain known anomalies. We specifically investigate whether classes containing samples of low quality can be boosted with synthetically generated examples characterized by high signal-to-noise ratio (SNR). Specifically, we employ a conditional Denoising Diffusion Probabilistic Model (cDDPM) to generate synthetic data for such classes. It works by learning the characteristics of high SNRs from anomaly classes that are less frequently affected by significant noise. The boosted dataset is compared with a baseline dataset (without the augmented data) by training an anomaly classifier and measuring the performances on a hold-out dataset populated only with high quality traces for all classes. We observe a significant improved performance (Precision, Recall, and F1 Scores) for the noise affected training classes proving the success of our methods.
[ "Denoising", "Signal Processing", "Anomaly Detection" ]
https://openreview.net/pdf?id=Bkm9j80WTj
https://openreview.net/forum?id=Bkm9j80WTj
fjHINVjqkf
official_review
1,726,850,742,545
Bkm9j80WTj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission8/Reviewer_oThH" ]
NLDL.org/2025/Conference
2025
title: Review for "Enhancing Fault Detection in Optical Networks with Conditional Denoising Diffusion Probabilistic Models" summary: The paper proposes a data augmentation scheme for fault detection in optical networks via conditional denoising diffusion probabilistic models and compares the performance of a classifier trained on this data to one trained on the original data and data generated by a baseline autoencoder model. strengths: The strengths of the paper are: - Diffusion models for one-dimensional data are still fairly rare such that any progress in this direction is appreciated. - The application case of Optical Networks is well-defined and compelling. - The empirical results suggest that data augmentation with the proposed model may outperform several baselines. weaknesses: However, I also see substantial room for improvement. - Key aspects of the experiments are not clear; perhaps because terminology is inconsistently defined. In particular: - What are the 'Upper Bound' and 'Lower Bound' data sets? I assume these refer to Dataset A and B, respectively. - The results for which model are shown in Table 3? - Why is Table 4 labeled 'Per-class performance'? Isn't the per-class performance given in Table 3? - What are 'Original' and 'Noisy' in Table 4? Are these also the Upper and Lower Bound? - Why do 'normal' and 'bad splice' get separate sections but not the other classes? Are these classes particularly important? If so - why? - In terms of interpretation: If Dataset A yields the best results on the validation data set - why generate any other data, at all? What would be the obstacle, in practice, to obtain this training data? How does the proposed model help to alleviate the problem? - Relatedly: The text, at least at some places, seems to suggest that training data ideally contains only high SNR samples - but this does not seem representative of real-world data, such that such a classifier may overfit the training data. What am I overlooking, here? - The denoising autoencoder is characterized as a discriminative model in line 304 of the paper. This strikes me as misleading. Autoencoders are not discriminative models. They might not be generate, either, but I would just cut the term 'discriminative' in this context. - Relatedly: Why isn't a VAE used as a baseline? The architecture could remain the same as for the denoising autoencoder, but it would be substantially simpler to use the model as a generator. confidence: 3 justification: In its current stage, the experimental details strike me as too unclear to be convincing. Nonetheless, the paper may well have a solid core and I am happy to be convinced by a rebuttal. final_rebuttal_confidence: 3 final_rebuttal_justification: The revision clarified my questions and addressed my most important concerns. With these changes, I believe that the paper crossed the acceptance boundary.
Bkm9j80WTj
Enhancing Fault Detection in Optical Networks with Conditional Denoising Diffusion Probabilistic Models
[ "Meadhbh Healy", "Thomas Martini Jørgensen" ]
The scarcity of high-quality anomalous data often poses a challenge in establishing effective automated fault detection schemes. This study addresses the issue in the context of fault detection in optical fibers using reflectometry data, where noise can obscure the detection of certain known anomalies. We specifically investigate whether classes containing samples of low quality can be boosted with synthetically generated examples characterized by high signal-to-noise ratio (SNR). Specifically, we employ a conditional Denoising Diffusion Probabilistic Model (cDDPM) to generate synthetic data for such classes. It works by learning the characteristics of high SNRs from anomaly classes that are less frequently affected by significant noise. The boosted dataset is compared with a baseline dataset (without the augmented data) by training an anomaly classifier and measuring the performances on a hold-out dataset populated only with high quality traces for all classes. We observe a significant improved performance (Precision, Recall, and F1 Scores) for the noise affected training classes proving the success of our methods.
[ "Denoising", "Signal Processing", "Anomaly Detection" ]
https://openreview.net/pdf?id=Bkm9j80WTj
https://openreview.net/forum?id=Bkm9j80WTj
9GVKGlYn8o
official_review
1,728,548,346,486
Bkm9j80WTj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission8/Reviewer_jhSS" ]
NLDL.org/2025/Conference
2025
title: While the authors conduct research on an interesting and relevant topic, the research lacks clarity, making it difficult to understand the relevance of the results. summary: The authors consider the case where data scarcity poses a challenge to establishing good defect detection in optical fibers. They propose generating synthetic data with improved SNRs to train better-performing anomaly classifiers. strengths: The authors consider a relevant topic from both the machine learning and use-case points of view. weaknesses: While the manuscript touches on a relevant topic, we consider that it should be better structured and, in particular, clearer regarding the experimental side and the results obtained. Below, we provide details regarding some improvement opportunities. GENERAL COMMENTS (1)- We would appreciate it if the authors could provide a brief description regarding the meaning of each class and the implications for optical network maintenance. Are all errors equally critical? Is it relevant only to identify the defect that took place, or also that such a defect will likely take place in the future? Furthermore, when presenting the results: (i) are the differences in results between models statistically significant? (ii) what are the practical implications of the results (e.g., detecting some defects very well and others not so well)? (2)- The authors aim to enhance the classifiers' performance by generating synthetic data. (i) how much data do they generate (from each class)? (ii) how does the amount of synthetic data impact the discriminative performance of the model? (iii) how similar is the synthetic data to the original one? (iv) what metric should be used to assess the similarity? (v) could the authors provide visual examples of real and synthetic data? (vi) do the authors venture some explanation as to why less noisy data results in a better classifier? (3)- Why did the authors choose a holdout dataset and not cross-validation? How was the holdout dataset created? Could the authors provide some insights on whether the data distribution between the train and test sets is similar or different? (4)- Metrics: (i) we encourage the authors to provide AUC ROC scores for their models to have a threshold-independent estimate of the models' quality. (ii) how did the authors determine the cut-off thresholds when evaluating accuracy/precision/recall/F1? (5)- The research paper aims to enhance the classifiers' performance by generating synthetic data. We encourage the authors to more clearly specify the experiments performed, how these relate to the overall goal, and what the particular hypotheses/goals motivating each experiment are. The authors should report the results in the same structure, showing the correspondence between hypotheses, experiments, and experiment outcomes. Furthermore, we invite them to decouple the general methodology section from the experiments performed and the descriptions of the particular models, and to provide some figures describing the whole methodology/procedure in detail. Do the authors use a single classifier or many of them? Why? (6)- There is a lack of clarity regarding the datasets used to perform the experiments. In particular, the authors mention upper- and lower-bound datasets in the Introduction and later introduce datasets 1-4 in subsection 4.1. A figure or table would be helpful to explain more clearly the characteristics and interplay between datasets 1-4 and the upper- and lower-bound datasets: which classes (e.g., "Bad splice") do they have? how many instances?
is there a dB segmentation? which ones include synthetic data? what is the role of the cDDPM and cDCAE models? (7)- Section 5.1.1: It is not clear to us what the authors meant regarding the results reported in Table 2. While the caption mentions that the accuracy corresponds to performance on three training datasets, they report on upper/lower bounds and two models. (8)- The authors devote subsection 5.4 to "Bad splice" - why does this kind of fault merit a whole subsection, while the rest do not seem to be reported in detail? (9)- Figure 2 introduces three kinds of embeddings: class embeddings, SNR embeddings, and Amp embeddings. We encourage the authors to provide further details on the rationale behind selecting these conditions, how these embeddings are computed, and how they relate to the cDDPM and cDCAE models. FIGURES (10)- Figure 2: it seems some arrows are missing from two embedding boxes. TABLES (11)- Align numbers to the right and use the same number of decimals. We suggest using four decimals, given that many cases reach the same result at two-decimal precision. (12)- Table 4: while the paper is about defect detection, why do the authors report (only) on the Normal class? There seems to be a mismatch between the table and the caption. MINOR COMMENTS (13)- "with four classes having training training" -> "with four classes having training" (14)- "linear layer has a leakyRelu()" -> "linear layer has a leaky ReLU" (15)- "1.1 Background" -> the subsection heading is redundant - no other subsections of the Introduction exist. confidence: 4 justification: While the approach and use case are relevant, the paper lacks clarity regarding the experimental setup and the results, making it difficult to assess the contribution of the particular approach used to generate synthetic data to enhance the classifier's performance. final_rebuttal_confidence: 4 final_rebuttal_justification: The authors have addressed all of the items and improved the manuscript.
Bkm9j80WTj
Enhancing Fault Detection in Optical Networks with Conditional Denoising Diffusion Probabilistic Models
[ "Meadhbh Healy", "Thomas Martini Jørgensen" ]
The scarcity of high-quality anomalous data often poses a challenge in establishing effective automated fault detection schemes. This study addresses the issue in the context of fault detection in optical fibers using reflectometry data, where noise can obscure the detection of certain known anomalies. We specifically investigate whether classes containing samples of low quality can be boosted with synthetically generated examples characterized by high signal-to-noise ratio (SNR). Specifically, we employ a conditional Denoising Diffusion Probabilistic Model (cDDPM) to generate synthetic data for such classes. It works by learning the characteristics of high SNRs from anomaly classes that are less frequently affected by significant noise. The boosted dataset is compared with a baseline dataset (without the augmented data) by training an anomaly classifier and measuring the performances on a hold-out dataset populated only with high quality traces for all classes. We observe a significant improved performance (Precision, Recall, and F1 Scores) for the noise affected training classes proving the success of our methods.
[ "Denoising", "Signal Processing", "Anomaly Detection" ]
https://openreview.net/pdf?id=Bkm9j80WTj
https://openreview.net/forum?id=Bkm9j80WTj
7bYNlGFOLk
official_review
1,727,347,664,302
Bkm9j80WTj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission8/Reviewer_KPoU" ]
NLDL.org/2025/Conference
2025
title: Interesting application of DDPMs for fault detection in optical fibers summary: This work is focused on the case of fault detection in optical fibers based on reflectometry data. Noise can make it challenging to detect certain anomalies in this data, and the paper proposes to use generative models to enhance the signal-to-noise ratio (SNR). Specifically, a conditional denoising diffusion probabilistic model is used to generate synthetic data with improved SNRs. The proposed solution is evaluated on relevant datasets and compared with suitable baselines, with promising results. strengths: 1) The application of denoising diffusion probabilistic models to fault detection in optical networks seems new, and it is interesting to see such models applied with success in this domain as well. 2) The methodology is evaluated on relevant datasets with promising results. 3) The writing is mostly clear and easy to follow. weaknesses: 1) The baselines that are reported in the manuscript are suitable. However, it would be beneficial to also include baselines that are not based on neural networks. Many such baselines are available (see e.g. [1]), and they would give the reader a better impression of the potential of the proposed approach and the need for using complex deep learning-based methods. 2) The referencing of the paper could be improved. Many works are mentioned but not properly cited. For example, the variational autoencoder, Wasserstein GAN, GRU, LeakyReLU, and Dropout are not properly referenced. Even though these are widely used methods and components, it is still important to reference them properly. Furthermore, the references themselves can also be improved. Several citations currently reference the pre-print version when published versions exist. This should be updated. [1] Sebastian Schmidl, Phillip Wenig, Thorsten Papenbrock. Anomaly detection in time series: a comprehensive evaluation. Proceedings of the VLDB Endowment, Volume 15, Issue 9, Pages 1779-1797. https://doi.org/10.14778/3538598.3538602 confidence: 4 justification: The technical novelty of this paper is not particularly significant. But the application is novel and interesting, and I think it fits the conference well. The paper is mostly well written and the results are promising. There is potential for improvement both in terms of adding one or more non-deep-learning baselines and improving the use of references, but I do not see these as critical flaws. Therefore, I recommend accepting the paper. final_rebuttal_confidence: 4 final_rebuttal_justification: I think the authors have done a good job in revising the manuscript, and I think the new version is an improvement. Given that my impression was positive even before the revision, I will keep my recommendation from the previous phase.
Bkm9j80WTj
Enhancing Fault Detection in Optical Networks with Conditional Denoising Diffusion Probabilistic Models
[ "Meadhbh Healy", "Thomas Martini Jørgensen" ]
The scarcity of high-quality anomalous data often poses a challenge in establishing effective automated fault detection schemes. This study addresses the issue in the context of fault detection in optical fibers using reflectometry data, where noise can obscure the detection of certain known anomalies. We specifically investigate whether classes containing samples of low quality can be boosted with synthetically generated examples characterized by high signal-to-noise ratio (SNR). Specifically, we employ a conditional Denoising Diffusion Probabilistic Model (cDDPM) to generate synthetic data for such classes. It works by learning the characteristics of high SNRs from anomaly classes that are less frequently affected by significant noise. The boosted dataset is compared with a baseline dataset (without the augmented data) by training an anomaly classifier and measuring the performances on a hold-out dataset populated only with high quality traces for all classes. We observe a significant improved performance (Precision, Recall, and F1 Scores) for the noise affected training classes proving the success of our methods.
[ "Denoising", "Signal Processing", "Anomaly Detection" ]
https://openreview.net/pdf?id=Bkm9j80WTj
https://openreview.net/forum?id=Bkm9j80WTj
1m0UrkS8gW
official_review
1,727,238,490,073
Bkm9j80WTj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission8/Reviewer_H4a2" ]
NLDL.org/2025/Conference
2025
title: Paper has some contribution but presentation and experimental evidence lack clarity summary: The paper proposes the use of a cDDPM (conditional Denoising Diffusion Probabilistic Model), i.e., a generative model, to augment a time series data set (i.e., optical time-domain reflectometry traces). The model learns signal characteristics from high-SNR anomaly data samples, which are less affected by noise. On the resulting data sets a classifier can achieve higher accuracy, precision and recall. strengths: Strengths: - The paper uses publicly available data sets. weaknesses: Weaknesses: - While the writing and grammar are generally good, the paper is unclear at many points and I had to read it several times to uncover the actual contribution and how everything fits together. - There is a lot of unnecessary information and there are figures that can be omitted in order to gain space for relevant descriptions. - The paper discusses a lot of research that seems to be directly competitive to the proposed methods, but only one method serves as a baseline (and it is unclear if it is one of the related works). - The experimental results (especially those in 5.1 and 5.2) are not discussed. confidence: 3 justification: Questions/Remarks: - I found it very hard to understand the actual contribution of the paper after reading the introduction. My suggestion is to add a figure in the intro to sketch what the paper is about. - Section 2 seems unnecessary to me. What I would have loved more would be a formal introduction of the data samples and the prediction tasks, i.e., how the data samples are arranged and used as tensors for computation (this would also be beneficial later when explaining the loss function). - Figure 1 should be a vector graphic. - There is a lot of related work in Section 3. Why is the baseline comparison in the experimental section so limited? - Line 217: "This dataset is comprised of both the real traces from the aforementioned four classes " - which four classes are you referring to? Please make it clear how the data sets are built; the definitions are currently a bit sloppy. - Line 280: "Using embeddings creates a learnable representation for each condition" - what does this mean? - Section 4.3.4: I would have liked to see a mathematical definition of the loss function. It is hard to re-implement everything based on the current information. - Table 2: show the results for the datasets separately - are there any noticeable things? - Table 2/Section 5.2: what are the upper bound and lower bound? How are they computed? In Section 5.3 you talk about the upper bound data set. This, however, has at no point been defined. - What's the benefit of Figs. 3 and 4? - Some minor points: - line 052 ", i.e." - line 075 training appears twice - line 130 increase[-s-] - line 149 that appears twice - line 159 reference missing - line 177 model[s] - line 227: we design a[n] ML - line 231 is comprised an - word missing - line 233: layer[s] - line 301: Abdelli et al.
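Regarding the reviewer's request for a mathematical definition of the loss (Section 4.3.4): a conditional DDPM is typically trained with the standard noise-prediction objective below. This is a generic reference formulation, not the submission's actual loss; the conditioning variable and embedding names are assumptions based on the review's description of Figure 2.

```latex
\[
\mathcal{L}(\theta) =
\mathbb{E}_{x_0,\, t,\, \epsilon \sim \mathcal{N}(0, I)}
\left[ \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,\; t,\; c\big) \big\|_2^2 \right],
\qquad
\bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s)
\]
```

Here $x_0$ is a clean reflectometry trace, $\beta_s$ is the noise schedule, and $c$ collects the conditioning information (e.g. class, SNR, and amplitude embeddings).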
8T8QkDsuO9
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models
[ "Gabriel Y. Arteaga", "Thomas B. Schön", "Nicolas Pielawski" ]
Uncertainty estimation is a necessary component when implementing AI in high-risk settings, such as autonomous cars, medicine, or insurances. Large Language Models (LLMs) have seen a surge in popularity in recent years, but they are subject to hallucinations, which may cause serious harm in high-risk settings. Despite their success, LLMs are expensive to train and run: they need a large amount of computations and memory, preventing the use of ensembling methods in practice. In this work, we present a novel method that allows for fast and memory-friendly training of LLM ensembles. We show that the resulting ensembles can detect hallucinations and are a viable approach in practice as only one GPU is needed for training and inference.
[ "Large Language Models", "Uncertainty Estimation", "Hallucination Detection", "Memory-Efficient Deep Ensembles" ]
https://openreview.net/pdf?id=8T8QkDsuO9
https://openreview.net/forum?id=8T8QkDsuO9
tYaPviGes4
official_review
1,728,304,451,335
8T8QkDsuO9
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission16/Reviewer_JRCF" ]
NLDL.org/2025/Conference
2025
title: Review for hallucination detection in LLMs summary: This paper focuses on leveraging an ensemble of Large Language Models (LLMs) to generate uncertainty estimates which can be used to detect LLM hallucinations. A chief contribution was the utilisation of Low-Rank Adaptation of Large Language Models (LoRA) to train the LLM ensembles in a memory-efficient and flexible manner. The computed uncertainty estimates were used as features in binary classification models to distinguish between correct and hallucinated responses. The hallucinations were categorised into faithfulness and factual hallucinations to detect potentially harmful predictions for LLM applications in high-risk settings. strengths: The paper's structure is good, it is well motivated, and it appears to be correct in its methodology and experiments. The contributions are relevant to addressing existing issues regarding the faithfulness and reliability of LLMs while also making the training process more computationally efficient. The results are presented with good use of baseline and alternative frameworks for comparison as well as good evaluation metrics. The details presented in the appendix are also beneficial in highlighting specific examples of prompts and responses. weaknesses: In the caption of Figure 1, LoRA is not spelled out, and as the figure appears before the Introduction section, the expanded form is not yet known to the reader; this could be clarified. Also, it could be clarified in the caption that V denotes the fast weight matrices, and the pattern behind the B and A vector boxes makes it difficult to read the text on them. In Tables 1 and 2, highlighting the presented method, for example as "BatchEnsemble (ours)", would improve clarity. Also, specifying how the top-1 accuracies in Table 1 were computed from the five classifiers would be useful. Lastly, the sentence "All models utilize..." in line 301 within Section 4 could be reworded to further emphasise that the model used was Mistral-7B-Instruct-v0.2 with pre-trained weights. confidence: 3 justification: Overall well-motivated work with solid contributions which could be used to improve both the efficiency and reliability of LLMs. final_rebuttal_confidence: 3 final_rebuttal_justification: Based on the revised edition of the submission, I am happy to accept it.
8T8QkDsuO9
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models
[ "Gabriel Y. Arteaga", "Thomas B. Schön", "Nicolas Pielawski" ]
Uncertainty estimation is a necessary component when implementing AI in high-risk settings, such as autonomous cars, medicine, or insurances. Large Language Models (LLMs) have seen a surge in popularity in recent years, but they are subject to hallucinations, which may cause serious harm in high-risk settings. Despite their success, LLMs are expensive to train and run: they need a large amount of computations and memory, preventing the use of ensembling methods in practice. In this work, we present a novel method that allows for fast and memory-friendly training of LLM ensembles. We show that the resulting ensembles can detect hallucinations and are a viable approach in practice as only one GPU is needed for training and inference.
[ "Large Language Models", "Uncertainty Estimation", "Hallucination Detection", "Memory-Efficient Deep Ensembles" ]
https://openreview.net/pdf?id=8T8QkDsuO9
https://openreview.net/forum?id=8T8QkDsuO9
o9PdH6MZZR
official_review
1,728,489,379,971
8T8QkDsuO9
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission16/Reviewer_d6y9" ]
NLDL.org/2025/Conference
2025
title: Interesting work in memory-efficient ensembling, but does it work for other models? summary: The article proposes a method to detect language model hallucinations, i.e. responses which disregard instructions or include incorrect information, based on uncertainty estimation. The authors base their approach in information theory, arguing that model uncertainty can be decomposed into epistemic uncertainty (tied to the model's knowledge of the data) and aleatoric uncertainty (representing variation in the data), and that these uncertainties are correlated with model hallucinations. To provide uncertainty estimation for existing pretrained language models, the authors propose augmenting language models with multiple LoRA adapters, which are randomly initialized and then finetuned separately on the target task. The uncertainty is then estimated from the estimated predictive entropy of the adapters. To detect hallucinations, the authors generate a labelled dataset of uncertainty estimates and correctness labels for a set of model prompts and responses. The authors evaluate their uncertainty estimation and hallucination detection on the Mistral-7B-Instruct-0.2 language model, and compare their approach against a baseline prompt-based method with repeated sampling, and the LoRA Ensemble method by Wang, Aitchison and Rudolph. They find that their method performs worse than LoRA Ensemble in classifying factual hallucinations and out-of-distribution examples, but performs best out of the sampled methods when classifying faithfulness hallucinations, achieving 97.8% accuracy on their test set based on SQuAD 2.0. After finetuning all of the evaluated models on questions from the SQuAD and MMLU datasets, the authors also find that their ensemble achieves better accuracy on questions from the respective test sets than the LoRA Ensemble and the original model finetuned by itself. strengths: The proposed approach provides a straightforward and memory-efficient method to approximate a model ensemble for a pretrained language model, and to generate uncertainty estimates from it. The resulting ensemble also improves on the question answering accuracy of the original standalone model after finetuning. The approach is likely also extensible to other parameter-efficient finetuning techniques. The authors present a reasonable hypothesis for why this approach works for faithfulness error detection. Based on the "snowball effect" where pretrained language models "commit" to continuing earlier mistakes, the authors link uncertainty for individual tokens to the model "committing" to a wrong answer. The experimental conditions are well documented, based on publicly available benchmarks and language models, and explicitly describe the dataset processing, making the experiments easier to replicate. weaknesses: The approach is presented as generally applicable to instruction-tuned language models with decoders - however, the experiments only evaluate the approaches on a single pretrained language model. Evaluations with multiple pretrained language models are necessary to establish how the hallucination detection performs with weaker or stronger pretrained models, and could also provide confidence estimates for the reported results. While the experimental conditions themselves are well documented, the parameters for the LoRA ensemble itself - such as the number of adapters and their size - are not given in the article. 
Since these parameters present a tradeoff between inference time, memory use and the results on the downstream tasks, omitting them from the article makes the results challenging to replicate. Additionally, it is not clear which dataset the training set for the hallucination classifiers is derived from. Finally, I think the paper would be stronger by referencing model editing techniques such as ROME [1], which attempt to change factual and conceptual associations in language models, while retaining their overall question answering capabilities. Ideally, these techniques could generate models with more factual errors but the same instruction following capabilities as the original, allowing experiments to substantiate the authors' hypothesis that faithfulness and factual errors are correlated with the estimated aleatoric and epistemic uncertainty. Additional questions which did not significantly impact the decision: * In the "Uncertainty Estimation" subsection, does $\mathcal{D}$ refer to the overall data distribution? * Since the weight decay of the LoRA Ensemble and noise injection lead to worse predictive results, are there any objectives or mechanisms in the main method which maintain diversity among the ensemble members (avoiding the case where B and A in all LoRA ensemble members go to zero?) * Since the faithfulness error detection is based on unanswerable questions from SQuAD 2.0, including factual questions with definite answers outside the question context, are there faithfulness hallucinations where the model is factually correct? Do the uncertainty estimates reflect this? [1] Meng, K., Bau, D., Andonian, A., & Belinkov, Y. (2022). Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35, 17359-17372. confidence: 4 justification: The paper is overall well written and easy to follow, and presents an easily adaptable and memory-efficient method to provide joint uncertainty and faithfulness error detection for existing pretrained language models. I think this paper could be a very good reference point for further work in the overlap between parameter-efficient finetuning and downstream use of uncertainty estimation. However, the paper omits key details about the ensemble design and training of the LoRA adapters, making it challenging to recreate the results. This also makes it difficult to exactly quantify the benefit to memory efficiency, one of the main stated benefits of the method. Since the method is proposed as a general-purpose uncertainty estimation and hallucination detection method, I also find it a significant weakness to only evaluate the approach on one pretrained model, when evaluations on other models would both bolster confidence in the method, and provide important evidence to corroborate the authors' hypothesis linking errors to aleatoric and epistemic uncertainty. Unfortunately, while I found the paper interesting and the ideas worth exploring further, I am not comfortable accepting it as-is. I would however be comfortable revising my rating to an accept if the ensemble details are included in the revised article. final_rebuttal_confidence: 4 final_rebuttal_justification: The authors' revisions include necessary details about the ensemble setup during their experiments, which support their argument that the LoRA adapters are serving as a memory-efficient approximation of a full model ensemble. Additionally, the authors have clarified key points, and thoroughly addressed the reviewers' questions and suggestions. 
While I would like a larger evaluation of the approach with different ensemble hyperparameters and base language models, the authors acknowledge this as a direction for future work, and their detailed experimental descriptions would also allow other researchers to perform this evaluation. With these revisions, I am comfortable accepting this paper, and would like to see it presented at the conference.
8T8QkDsuO9
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models
[ "Gabriel Y. Arteaga", "Thomas B. Schön", "Nicolas Pielawski" ]
Uncertainty estimation is a necessary component when implementing AI in high-risk settings, such as autonomous cars, medicine, or insurances. Large Language Models (LLMs) have seen a surge in popularity in recent years, but they are subject to hallucinations, which may cause serious harm in high-risk settings. Despite their success, LLMs are expensive to train and run: they need a large amount of computations and memory, preventing the use of ensembling methods in practice. In this work, we present a novel method that allows for fast and memory-friendly training of LLM ensembles. We show that the resulting ensembles can detect hallucinations and are a viable approach in practice as only one GPU is needed for training and inference.
[ "Large Language Models", "Uncertainty Estimation", "Hallucination Detection", "Memory-Efficient Deep Ensembles" ]
https://openreview.net/pdf?id=8T8QkDsuO9
https://openreview.net/forum?id=8T8QkDsuO9
UY0eMIOUHT
meta_review
1,730,553,369,396
8T8QkDsuO9
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission16/Area_Chair_CKcc" ]
NLDL.org/2025/Conference
2025
metareview: The paper aims to detect hallucinations of LLMs by estimating the uncertainty of the model. The reviewers raised several questions, most of which concerned clarification of the text, and these were largely answered by the authors. I recommend accepting the paper for a presentation. recommendation: Accept (Oral) suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down confidence: 4: The area chair is confident but not absolutely certain
8T8QkDsuO9
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models
[ "Gabriel Y. Arteaga", "Thomas B. Schön", "Nicolas Pielawski" ]
Uncertainty estimation is a necessary component when implementing AI in high-risk settings, such as autonomous cars, medicine, or insurances. Large Language Models (LLMs) have seen a surge in popularity in recent years, but they are subject to hallucinations, which may cause serious harm in high-risk settings. Despite their success, LLMs are expensive to train and run: they need a large amount of computations and memory, preventing the use of ensembling methods in practice. In this work, we present a novel method that allows for fast and memory-friendly training of LLM ensembles. We show that the resulting ensembles can detect hallucinations and are a viable approach in practice as only one GPU is needed for training and inference.
[ "Large Language Models", "Uncertainty Estimation", "Hallucination Detection", "Memory-Efficient Deep Ensembles" ]
https://openreview.net/pdf?id=8T8QkDsuO9
https://openreview.net/forum?id=8T8QkDsuO9
Q7yWpZoiVZ
decision
1,730,901,555,007
8T8QkDsuO9
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We recommend an oral and a poster presentation given the AC's and reviewers' recommendations.
8T8QkDsuO9
Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models
[ "Gabriel Y. Arteaga", "Thomas B. Schön", "Nicolas Pielawski" ]
Uncertainty estimation is a necessary component when implementing AI in high-risk settings, such as autonomous cars, medicine, or insurances. Large Language Models (LLMs) have seen a surge in popularity in recent years, but they are subject to hallucinations, which may cause serious harm in high-risk settings. Despite their success, LLMs are expensive to train and run: they need a large amount of computations and memory, preventing the use of ensembling methods in practice. In this work, we present a novel method that allows for fast and memory-friendly training of LLM ensembles. We show that the resulting ensembles can detect hallucinations and are a viable approach in practice as only one GPU is needed for training and inference.
[ "Large Language Models", "Uncertainty Estimation", "Hallucination Detection", "Memory-Efficient Deep Ensembles" ]
https://openreview.net/pdf?id=8T8QkDsuO9
https://openreview.net/forum?id=8T8QkDsuO9
GqVqogYd0g
official_review
1,728,545,098,780
8T8QkDsuO9
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission16/Reviewer_TshU" ]
NLDL.org/2025/Conference
2025
title: Hallucination detection in LLMs summary: The method proposed uses entropy to estimate uncertainty in LLM outputs and uses this as input in various ensembles to predict whether an LLM is hallucinating or not. strengths: The paper demonstrates the usefulness of uncertainty estimates (to some extent) and how best to combine these for hallucination classification in LLMs. There is no evidence in the paper that the obtained results compare to other hallucination detection strategies. weaknesses: The method variations are not compared to other hallucination detection methods (referenced in the paper), only to different strategies for combining and using the uncertainty metrics. I recommend, as a minimum, comparing to the results obtained in those other papers in the discussion/conclusion. confidence: 4 justification: There is a lack of comparison to existing literature on hallucination detection.
7TwvcPAyxX
Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting
[ "Lars Uebbing", "Harald Lykke Joakimsen", "Luigi Tommaso Luppino", "Iver Martinsen", "Andrew McDonald", "Kristoffer Knutsen Wickstrøm", "Sébastien Lefèvre", "Arnt B. Salberg", "Scott Hosking", "Robert Jenssen" ]
With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research by leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC). The deep learning subfield of eXplainable AI (XAI) has gained enormous attention in order to gauge feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of the IceNet was conducted, using gradient saliency maps to interrogate its feature importance. A majority of XAI studies provide information about feature importance as revealed by the XAI method, but rarely provide thorough analysis of effects from reducing the number of input variables. In this paper, we train versions of the IceNet with drastically reduced numbers of input features according to results of XAI and investigate the effects on the sea ice predictions, on average and with respect to specific events. Our results provide evidence that the model generally performs better when less features are used, but in case of anomalous events, a larger number of features is beneficial. We believe our thorough study of the IceNet in terms of feature importance revealed by XAI may give inspiration for other deep learning-based problem scenarios and application domains.
[ "deep learning", "explainability", "feature importance", "IceNet", "sea ice concentration (SIC)", "climate" ]
https://openreview.net/pdf?id=7TwvcPAyxX
https://openreview.net/forum?id=7TwvcPAyxX
qOA9A3Z2EF
official_review
1,729,027,273,154
7TwvcPAyxX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission52/Reviewer_Grt6" ]
NLDL.org/2025/Conference
2025
title: Study of IceNet models under saliency-based feature reduction summary: This work investigates the impact of XAI methods in conjunction with IceNet (Andersson et al. 2021), a UNet model for forecasting sea-ice concentration (SIC) levels in the Arctic. In particular, the work looks to investigate the impact of saliency-based occlusion at the feature level to study the robustness and model performance under limitations of a reduced feature space. The authors base their approach on a gradient-based saliency method developed explicitly for SIC forecasting (Joakimsen et al. 2022), which in turn is largely based on early gradient-based saliency via backpropagation (Simonyan et al. 2013). The manuscript makes a strong case for the adaptability of reduced feature models in improving forecast accuracy without compromising on performance, even in scenarios involving anomalous events. strengths: - S1: The study is well motivated, and tackles a pivotal modern problem -- accurate Arctic sea ice forecasting -- which is central to further advances in climate dynamics. The work could potentially have significant societal implications for further research in climate change mitigation and policy making. - S2: The results are succinctly presented, and the ensuing discussion goes "the extra mile" to offer insightful observations, highlighting expected outcomes and anomalies. The discussion provides the reader with an understanding of the dynamics at play for the studied models. - S3: The experimental setup and methodology are detailed, and the method seems reasonable and well motivated, grounded in important previous works in the field. In particular, this reviewer considered the proposed ensembling a nice touch; however, we would have liked to see even more investigations into the robustness of the model with uncertainty metrics for the reported results, which would likely strengthen the paper. - S4: The approach shows clear benefits and practical applications for modelling techniques with explainability and interpretability methods, paving the way for further investigations into the effect of saliency-based feature selection in sea-ice concentration forecasts. The use of saliency for feature reduction serves as an example of novel ways XAI methods can be exploited for modelling in interdisciplinary scientific studies. weaknesses: - W1: The basis for saliency underlying the approach by Simonyan et al., and subsequently also for Joakimsen et al., has been shown to be more or less causally independent of the predictions of the model under scrutiny (Adebayo et al. 2018). In that work, the authors demonstrate that the saliency maps produced by gradient-based methods show little change when replacing most of the layers of a convolutional network with randomly initialized weights. While this can be said to be a weakness of the aforementioned works, the reliance on this approach in the current work is also affected by proxy. This reviewer would have liked to see applications of methods that are more robust in this regard, e.g., GradCAM or occlusion-based methods, such as SHAP or LIME, particularly given the occlusion-based approach of the study. - W2: Table 1 shows accuracies over what is presumably the full ensemble of 10 models. Given the ensemble, it should be relatively straightforward to estimate the uncertainty of these predictions. 
This would, in this reviewer's opinion, improve the presentation of the results, and provide the reader with an insight into the stability, reliability, and overall significance of the reported results. - W3: The work seems to be heavily based on previous works, and could be presented with clearer delineation as a separate study, particularly in the abstract and introduction. On the first read-through, the reader is left wondering as to the exact contributions of the current work. While the work "goes further" than the previous work, the extent of the scope of the current work is not too clear, particularly for researchers outside the niche of SIC forecasting. As it stands, the work is to be seen as an incremental study, as opposed to a significant step forward. - W4: Given the nature of the study, it would be reasonable to ask for a robustness analysis of the model under more challenging perturbations. Given that convolutional models generally tend to be sensitive to perturbations, having estimates of the robustness of the trained models would assure the reader that the results are not due to anomalies. confidence: 4 justification: The study reveals that the trained IceNet models are able to produce better results with lower lead times under the "reduced" feature space, as well as producing better results with longer lead times under the "minimal" feature space, and the authors discuss the implication for linear trend forecasting (LTF) for sea-ice concentration levels. Moreover, the study finds that the full model is still preferable in cases that exhibit higher deviations. As these feature spaces are reduced using the XAI method following Joakimsen et al., the authors demonstrate that this method has practical advantages in continued studies on sea-ice concentration forecasting, as well as interpretable modelling. While this reviewer has some concerns about the specific methodology of saliency mappings under the approach outlined by Simonyan et al., we hope that this review can serve as motivation for the authors to dive deeper into the robustness (as well as statistical rigour) of the proposed method in future work. As it stands, while the work must be seen as largely incremental, the work comes across as thorough while tackling an important problem. This reviewer therefore recommends the paper be accepted to the conference. final_rebuttal_confidence: 4 final_rebuttal_justification: The authors improved an already well-formulated work with the feedback from the review. Our score still stands, and we recommend accepting the paper to the conference.
7TwvcPAyxX
Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting
[ "Lars Uebbing", "Harald Lykke Joakimsen", "Luigi Tommaso Luppino", "Iver Martinsen", "Andrew McDonald", "Kristoffer Knutsen Wickstrøm", "Sébastien Lefèvre", "Arnt B. Salberg", "Scott Hosking", "Robert Jenssen" ]
With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research by leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC). The deep learning subfield of eXplainable AI (XAI) has gained enormous attention in order to gauge feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of the IceNet was conducted, using gradient saliency maps to interrogate its feature importance. A majority of XAI studies provide information about feature importance as revealed by the XAI method, but rarely provide thorough analysis of effects from reducing the number of input variables. In this paper, we train versions of the IceNet with drastically reduced numbers of input features according to results of XAI and investigate the effects on the sea ice predictions, on average and with respect to specific events. Our results provide evidence that the model generally performs better when less features are used, but in case of anomalous events, a larger number of features is beneficial. We believe our thorough study of the IceNet in terms of feature importance revealed by XAI may give inspiration for other deep learning-based problem scenarios and application domains.
[ "deep learning", "explainability", "feature importance", "IceNet", "sea ice concentration (SIC)", "climate" ]
https://openreview.net/pdf?id=7TwvcPAyxX
https://openreview.net/forum?id=7TwvcPAyxX
XdgtjDsZ2m
meta_review
1,730,404,108,274
7TwvcPAyxX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission52/Area_Chair_6yVn" ]
NLDL.org/2025/Conference
2025
metareview: Quality: The paper is methodologically sound, grounded in significant prior work, and provides detailed experimental setups and analyses. It introduces a comprehensive evaluation of feature reduction in sea ice forecasting using the IceNet model, examining reduced feature sets in normal and anomalous data scenarios. However, suggestions for improvement include an ablation study to determine specific feature contributions and uncertainty quantification for ensemble results. Pros: - Effective reduction in the feature space, achieving high accuracy on general data while maintaining performance on anomalous events. - Clear motivation and presentation of experiments with thoughtful observations on expected versus observed outcomes. - Practical implications for climate change modeling and policy-making in the Arctic. Cons: - The reliance on saliency-based methods from earlier work (Joakimsen et al., Simonyan et al.) has limitations, as gradient-based saliency maps may lack causality with model predictions. - The study could benefit from using more robust XAI techniques (e.g., GradCAM, SHAP, or LIME) and a robustness analysis under perturbations. - The incremental nature of the study could have been clarified more, especially for readers outside this niche field. Clarity: - The work is well-written with clear motivations and structured analyses, but the introduction and abstract could benefit from a clearer distinction of the novel contributions. Originality: - The work is novel in its application of feature reduction in Arctic sea ice forecasting using XAI-driven methods. However, it heavily builds on previous studies, which may make the contributions seem incremental. Significance: - The research is of moderate significance, primarily valuable for the climate modeling and forecasting community. The exploration of feature reduction in the IceNet model holds practical implications, but further robustness analyses could enhance its broader applicability. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 4: The area chair is confident but not absolutely certain
7TwvcPAyxX
Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting
[ "Lars Uebbing", "Harald Lykke Joakimsen", "Luigi Tommaso Luppino", "Iver Martinsen", "Andrew McDonald", "Kristoffer Knutsen Wickstrøm", "Sébastien Lefèvre", "Arnt B. Salberg", "Scott Hosking", "Robert Jenssen" ]
With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research by leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC). The deep learning subfield of eXplainable AI (XAI) has gained enormous attention in order to gauge feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of the IceNet was conducted, using gradient saliency maps to interrogate its feature importance. A majority of XAI studies provide information about feature importance as revealed by the XAI method, but rarely provide thorough analysis of effects from reducing the number of input variables. In this paper, we train versions of the IceNet with drastically reduced numbers of input features according to results of XAI and investigate the effects on the sea ice predictions, on average and with respect to specific events. Our results provide evidence that the model generally performs better when less features are used, but in case of anomalous events, a larger number of features is beneficial. We believe our thorough study of the IceNet in terms of feature importance revealed by XAI may give inspiration for other deep learning-based problem scenarios and application domains.
[ "deep learning", "explainability", "feature importance", "IceNet", "sea ice concentration (SIC)", "climate" ]
https://openreview.net/pdf?id=7TwvcPAyxX
https://openreview.net/forum?id=7TwvcPAyxX
QMBG1UitJq
official_review
1,729,076,573,729
7TwvcPAyxX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission52/Reviewer_ffrQ" ]
NLDL.org/2025/Conference
2025
title: Review of Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting summary: The present paper performs an investigation of feature reduction in the traditional sea ice forecasting method. Considering the state-of-the-art IceNet model, a U-Net architecture for forecasting the Arctic region sea ice with 50 features, the authors proposed two new sets of features to apply to the same model: one with 21 features and another with just 11 features. They showed that the reduced features had good performance, better than the original set of features when forecasting normal events, but when anomalous events are considered, the original set of features had better performance. strengths: The authors combined the state-of-the-art IceNet model with the work of Joakimsen et al. to select only a few features for sea ice forecasting. They provided two sets of features, selecting the features that Joakimsen et al. identified as important using XAI. Using the three sets of features, they performed three different experiments: the first focusing on the anomalous month of September 2013, the second on all test data, and the third on the 10% most anomalous data. This is important to show how the feature selection behaves in relation to different difficulties in the data. The reduced set of features was good on general data but could not extrapolate to anomalous data, for which better forecasts were only obtained when considering all features. weaknesses: The experiments could be extended using an ablation study to better determine which of the 50 features would help forecasting, complementing the XAI approach from Joakimsen et al. I want to understand the output of the U-Net model. You have the image size (432x432), 6 lead times, and 3 SIC classes? If the percentage can determine the SIC class, why would you need this new 3-value dimension? Please provide text explaining the model. As the authors showed a comparison between different feature sets, it would be interesting to see the difference in execution time between the experiments, showing the importance of using reduced feature sets in the forecasting. confidence: 5 justification: The submitted paper extends the work of Andersson et al. and Joakimsen et al. by experimenting with different sets of features for forecasting sea ice. They performed forecasting with different objectives, showing the strengths and weaknesses of the proposed feature sets for forecasting anomalous and well-behaved data. As they provided a good set of experiments to show their proposed approach, I recommend that it be accepted.
7TwvcPAyxX
Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting
[ "Lars Uebbing", "Harald Lykke Joakimsen", "Luigi Tommaso Luppino", "Iver Martinsen", "Andrew McDonald", "Kristoffer Knutsen Wickstrøm", "Sébastien Lefèvre", "Arnt B. Salberg", "Scott Hosking", "Robert Jenssen" ]
With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research by leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC). The deep learning subfield of eXplainable AI (XAI) has gained enormous attention in order to gauge feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of the IceNet was conducted, using gradient saliency maps to interrogate its feature importance. A majority of XAI studies provide information about feature importance as revealed by the XAI method, but rarely provide thorough analysis of effects from reducing the number of input variables. In this paper, we train versions of the IceNet with drastically reduced numbers of input features according to results of XAI and investigate the effects on the sea ice predictions, on average and with respect to specific events. Our results provide evidence that the model generally performs better when less features are used, but in case of anomalous events, a larger number of features is beneficial. We believe our thorough study of the IceNet in terms of feature importance revealed by XAI may give inspiration for other deep learning-based problem scenarios and application domains.
[ "deep learning", "explainability", "feature importance", "IceNet", "sea ice concentration (SIC)", "climate" ]
https://openreview.net/pdf?id=7TwvcPAyxX
https://openreview.net/forum?id=7TwvcPAyxX
E9avkJpiIM
decision
1,730,901,556,725
7TwvcPAyxX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: Given the AC's positive recommendation and the reviewers' recommendations, we recommend an oral and a poster presentation.
7TwvcPAyxX
Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting
[ "Lars Uebbing", "Harald Lykke Joakimsen", "Luigi Tommaso Luppino", "Iver Martinsen", "Andrew McDonald", "Kristoffer Knutsen Wickstrøm", "Sébastien Lefèvre", "Arnt B. Salberg", "Scott Hosking", "Robert Jenssen" ]
With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research by leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC). The deep learning subfield of eXplainable AI (XAI) has gained enormous attention in order to gauge feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of the IceNet was conducted, using gradient saliency maps to interrogate its feature importance. A majority of XAI studies provide information about feature importance as revealed by the XAI method, but rarely provide thorough analysis of effects from reducing the number of input variables. In this paper, we train versions of the IceNet with drastically reduced numbers of input features according to results of XAI and investigate the effects on the sea ice predictions, on average and with respect to specific events. Our results provide evidence that the model generally performs better when less features are used, but in case of anomalous events, a larger number of features is beneficial. We believe our thorough study of the IceNet in terms of feature importance revealed by XAI may give inspiration for other deep learning-based problem scenarios and application domains.
[ "deep learning", "explainability", "feature importance", "IceNet", "sea ice concentration (SIC)", "climate" ]
https://openreview.net/pdf?id=7TwvcPAyxX
https://openreview.net/forum?id=7TwvcPAyxX
CA4RDqq2xd
official_review
1,728,036,018,208
7TwvcPAyxX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission52/Reviewer_9b9J" ]
NLDL.org/2025/Conference
2025
title: Sound manuscript with valuable evaluations and results for the machine learning sea ice coverage prediction audience summary: The manuscript presents a study that expands on the previously published machine learning model IceNet for sea ice concentration forecasts ranging from one through six months lead time in the northern hemisphere. Results suggest that a meaningful reduction of the number of input features (based on their importance score) can improve the forecast quality under normal conditions. In extreme situations and anomalies, though, the full set of input features results in improved forecasts of sea ice cover compared to when using a reduced or minimal set of input features. strengths: ### Reliability Multiple seeds ensure the reproducibility and reliability of the results. In the figures, though, it would be great to see the standard deviation of the 10 different models (as error bars or as a shaded area surrounding the mean line). ### Soundness and Approachability The manuscript is sound and clearly written. It outlines well the research goals and how they are obtained. Results are well supported by numerous plots, which are interpreted concisely. weaknesses: ### Structure The manuscript would benefit from a structural rework. For example, the transition from Methodology to Results appears somewhat abrupt and the Methods section seems more like Related Work or Foundations. I'd thus suggest renaming section 2 to Related Work and Methodology (or creating two separate sections) and outlining the manipulations or modifications this manuscript adds to prior work (such as currently contained at the beginning of the Results section). Also, the Methods section could be subsectioned to outline the conceptualizations of the two core aspects of this work: (a) retraining IceNet with a reduced set of features, and (b) investigating IceNet's generalization performance. The Results section could be subsectioned accordingly. ### Clarity - Can the authors please include information about how IceNet generates predictions out to 6 months? That is, does it run autoregressively, are separate models trained for each lead time, or does the model generate all outputs at once? - Also, it is unclear how the Linear Trend Forecast model is designed and where the data comes from. In this vein, lines 104-105 could be extended to contain concrete information about the satellites that recorded the data, where it is available, and how it has been preprocessed. - Figures could benefit from the inclusion of a grid. Once done, concrete values can be extracted from the figures, and Table 1 could be converted into a figure similar to Figure 4 to improve the presentation of results and make it more coherent. - Does Figure 5 contain only extreme events (as suggested in lines 298-302)? This might be emphasized around lines 286-288 and also added to the figure caption. If Figure 5 does not only contain extreme events, can the authors mark extreme events explicitly? ### Minor remarks and typos - Line 143 remove one "the" - In Line 167, add "(land masks)" to the text again to repeat what is meant by metadata. - When talking about fewer ensemble members compared to Andersson et al., please add the number of ensembles used there. - Missing "s" at "drop" in line 316? confidence: 4 justification: Even though the manuscript can be improved in various ways, it contains valuable evaluations that are worth sharing with the ML and sea ice research community. 
In particular, this research reveals the value of considering a full set of input variables when aiming to capture anomalous events with higher accuracy. To my understanding, these results differ from previous work which suggested the use of a minimal set of input variables, even for extreme events. The manuscript presented here, though, reveals that the conclusions in the former study might have been wrongly drawn from a single example. When repeating the analysis with a statistically meaningful number of extreme events, the pattern reverses and suggests the employment of many input features. My recommendations for revisions do not ask for additional analyses but ask for more information and for a reordering of the sections, which I consider solvable in the rebuttal period. Thus, I recommend accepting this article for the conference. final_rebuttal_confidence: 4 final_rebuttal_justification: The authors carefully addressed and solved my concerns. During the rebuttal, one downside became clear, namely a large overlap of standard deviations for different configurations. This weakens the relevance of different input configurations. Nevertheless, the analyses and results are worth sharing with the community.
7TwvcPAyxX
Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting
[ "Lars Uebbing", "Harald Lykke Joakimsen", "Luigi Tommaso Luppino", "Iver Martinsen", "Andrew McDonald", "Kristoffer Knutsen Wickstrøm", "Sébastien Lefèvre", "Arnt B. Salberg", "Scott Hosking", "Robert Jenssen" ]
With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research by leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC). The deep learning subfield of eXplainable AI (XAI) has gained enormous attention in order to gauge feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of the IceNet was conducted, using gradient saliency maps to interrogate its feature importance. A majority of XAI studies provide information about feature importance as revealed by the XAI method, but rarely provide thorough analysis of effects from reducing the number of input variables. In this paper, we train versions of the IceNet with drastically reduced numbers of input features according to results of XAI and investigate the effects on the sea ice predictions, on average and with respect to specific events. Our results provide evidence that the model generally performs better when less features are used, but in case of anomalous events, a larger number of features is beneficial. We believe our thorough study of the IceNet in terms of feature importance revealed by XAI may give inspiration for other deep learning-based problem scenarios and application domains.
[ "deep learning", "explainability", "feature importance", "IceNet", "sea ice concentration (SIC)", "climate" ]
https://openreview.net/pdf?id=7TwvcPAyxX
https://openreview.net/forum?id=7TwvcPAyxX
7JzF6fH1St
official_review
1,729,019,853,834
7TwvcPAyxX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission52/Reviewer_hzxt" ]
NLDL.org/2025/Conference
2025
title: Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting summary: This work presents a novel model applied to climate forecasting. The authors reduce the number of features and preserve the performance of the model. Results exhibit how the feature reduction improves the model in several cases. strengths: Results exhibit how the feature reduction improves the model in several cases. weaknesses: The authors claim that "Our results showed that the models with fewer features generally provide higher accuracies in forecasts for all lead times" in the conclusion. Perhaps this is not completely true, as is evident in the results for extreme events. I recommend changing this kind of sentence. The authors could also explain how the feature reduction impacts the performance in terms of computing. confidence: 5 justification: Results exhibit how the feature reduction improves the model in several cases.
7TwvcPAyxX
Investigating the Impact of Feature Reduction for Deep Learning-based Seasonal Sea Ice Forecasting
[ "Lars Uebbing", "Harald Lykke Joakimsen", "Luigi Tommaso Luppino", "Iver Martinsen", "Andrew McDonald", "Kristoffer Knutsen Wickstrøm", "Sébastien Lefèvre", "Arnt B. Salberg", "Scott Hosking", "Robert Jenssen" ]
With the state-of-the-art IceNet model, deep learning has contributed to an important aspect of climate research by leveraging a range of climate inputs to provide accurate forecasts of Arctic sea ice concentration (SIC). The deep learning subfield of eXplainable AI (XAI) has gained enormous attention in order to gauge feature importance of neural networks, for instance by leveraging network gradients. In recent work, an XAI study of the IceNet was conducted, using gradient saliency maps to interrogate its feature importance. A majority of XAI studies provide information about feature importance as revealed by the XAI method, but rarely provide thorough analysis of effects from reducing the number of input variables. In this paper, we train versions of the IceNet with drastically reduced numbers of input features according to results of XAI and investigate the effects on the sea ice predictions, on average and with respect to specific events. Our results provide evidence that the model generally performs better when less features are used, but in case of anomalous events, a larger number of features is beneficial. We believe our thorough study of the IceNet in terms of feature importance revealed by XAI may give inspiration for other deep learning-based problem scenarios and application domains.
[ "deep learning", "explainability", "feature importance", "IceNet", "sea ice concentration (SIC)", "climate" ]
https://openreview.net/pdf?id=7TwvcPAyxX
https://openreview.net/forum?id=7TwvcPAyxX
3izY1MZi0p
official_review
1,728,440,011,100
7TwvcPAyxX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission52/Reviewer_HRXG" ]
NLDL.org/2025/Conference
2025
title: Paper on a relevant topic; however, more experimentation is needed summary: The paper addresses the impact of reducing the input features used to predict sea ice concentration (SIC) and seeks some explainability on how the features impact the prediction. The paper presents a short description of the problem the authors addressed. In my understanding, the paper does not present a new technique/approach to solve a problem: it only tries to improve previous results by using different combinations of the input features. This could be interesting for people in the climate-related field. strengths: Tackles a relevant topic. Good accuracy in the predictions. weaknesses: Lack of novelty. Lack of technical details: what are the bases for defining the reduced and minimal feature setups (Sec. 3)? Lack of information on why the topic is important, what the current issues are, and how the authors' contribution provides a solution to the problem. confidence: 4 justification: More experimentation is required. More details on the kind of problem they try to solve must be provided. What is the relevance of the work within the field?
43B1iDOq6l
Efficient learning of molecule properties with Graph Neural Networks and 3D molecule features
[]
Graph Neural Networks (GNNs) have emerged as a powerful tool in predicting molecular properties based on structural data. While GNNs excel in identifying local patterns within molecules, their ability to capture global properties remains limited due to inherent structural challenges such as oversmoothing. We introduce an innovative GNN-based model that integrates global 3D molecular features with standard graph representations to enhance the prediction of molecular properties. The proposed model is evaluated using benchmark datasets ESOL and FreeSolv and it outperforms existing models. It demonstrates the crucial benefit of giving GNN models easy access to global information about the graph, in the context of applications to chemistry. Additionally, the model's architecture allows for efficient training with relatively modest computational resources, making it practical for widespread application.
[ "Graph neural networks", "molecules", "graphs", "deep learning" ]
https://openreview.net/pdf?id=43B1iDOq6l
https://openreview.net/forum?id=43B1iDOq6l
nk8Gm7HiYq
decision
1,730,901,555,639
43B1iDOq6l
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
43B1iDOq6l
Efficient learning of molecule properties with Graph Neural Networks and 3D molecule features
[]
Graph Neural Networks (GNNs) have emerged as a powerful tool in predicting molecular properties based on structural data. While GNNs excel in identifying local patterns within molecules, their ability to capture global properties remains limited due to inherent structural challenges such as oversmoothing. We introduce an innovative GNN-based model that integrates global 3D molecular features with standard graph representations to enhance the prediction of molecular properties. The proposed model is evaluated using benchmark datasets ESOL and FreeSolv and it outperforms existing models. It demonstrates the crucial benefit of giving GNN models easy access to global information about the graph, in the context of applications to chemistry. Additionally, the model's architecture allows for efficient training with relatively modest computational resources, making it practical for widespread application.
[ "Graph neural networks", "molecules", "graphs", "deep learning" ]
https://openreview.net/pdf?id=43B1iDOq6l
https://openreview.net/forum?id=43B1iDOq6l
lZshayPVOQ
official_review
1,727,970,533,823
43B1iDOq6l
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission27/Reviewer_q17A" ]
NLDL.org/2025/Conference
2025
title: Efficient learning of molecule properties with Graph Neural Networks and 3D molecule features summary: This work aims to improve performance on two molecular benchmark tasks, FREESOLV and ESOL, using selected global molecular features and a unique single-node-output approach from a graph attention network. The results presented on these two datasets are very strong compared to the included benchmarks. strengths: - The results presented in the paper are very strong - Previous work is well detailed, although more could be said about representation learning with GNNs for molecular properties (GraphCL, InfoGraph, etc.), and the GATConvs used - Strong detail is given on datasets and features - The source (paperswithcode) is given for the benchmarks, which improves my trust in the benchmark selection process - The single-node-output is unique and qualitatively well motivated weaknesses: - The range of datasets used, only two, is limited. - The Open Graph Benchmark (OGB) is the usual source for the benchmarks used - but for molecular graph property prediction, it includes many more datasets than used in this work. - Several other works have used 3D atom positions for these tasks (see References below), so the claims about novelty could be toned down. The improvement in performance is significant enough without drastic novelty in your method. - Error bounds are not produced for main tables, including for benchmark models - More detail should be given on hyper-parameters and feature selection - Results with pooling outputs should be included alongside benchmarks or with the ablation study - The standard scaffold splits for the datasets are not used - The style of the paper is closer to that of a master's dissertation than a peer-reviewed publication (for example the network architecture schematic is not needed). I'd recommend rewriting with an eye to providing necessary and useful detail on the experimental methodology and existing works used, for example hyperparameter selection and GATConvs. **Questions** These are questions about the whole work, not just the weaknesses above. - Why only use these two datasets? - How did you determine which molecular properties to use? - How did you determine your hyperparameters? What about other training details? - What is the "most peripheral node"? How is this determined, and how do results differ when using other nodes? References: Liu, S., Wang, H., Liu, W., Lasenby, J., Guo, H., and Tang, J. (2021). "Pre-training Molecular Graph Representation with 3D Geometry", arXiv e-prints, Art. no. arXiv:2110.07728. doi:10.48550/arXiv.2110.07728. Stärk, H., Beaini, D., Corso, G., Tossou, P., Dallago, C., Günnemann, S. & Lió, P. (2022). "3D Infomax improves GNNs for Molecular Property Prediction." Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:20479-20502. Available from https://proceedings.mlr.press/v162/stark22a.html. confidence: 3 justification: This paper presents a large improvement in performance over two molecular benchmark datasets. Several steps are unique and interesting, in particular the single-node readout instead of pooling, which is well motivated. However, only two datasets are used, and further, no error bounds are given, so statistical significance cannot be properly attributed. Additionally, despite the claims in the related work of the paper, other works have included 3D global features alongside local features. 
This means that in the absence of further benchmarks, the paper is essentially "Using a single output node and specific global features improves performance on two datasets". This is a valid contribution, but without further detail on experimental design and error bounds, I cannot recommend publication.
43B1iDOq6l
Efficient learning of molecule properties with Graph Neural Networks and 3D molecule features
[]
Graph Neural Networks (GNNs) have emerged as a powerful tool in predicting molecular properties based on structural data. While GNNs excel in identifying local patterns within molecules, their ability to capture global properties remains limited due to inherent structural challenges such as oversmoothing. We introduce an innovative GNN-based model that integrates global 3D molecular features with standard graph representations to enhance the prediction of molecular properties. The proposed model is evaluated using benchmark datasets ESOL and FreeSolv and it outperforms existing models. It demonstrates the crucial benefit of giving GNN models easy access to global information about the graph, in the context of applications to chemistry. Additionally, the model's architecture allows for efficient training with relatively modest computational resources, making it practical for widespread application.
[ "Graph neural networks", "molecules", "graphs", "deep learning" ]
https://openreview.net/pdf?id=43B1iDOq6l
https://openreview.net/forum?id=43B1iDOq6l
ZBT4MYEccd
official_review
1,728,419,655,240
43B1iDOq6l
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission27/Reviewer_xZRb" ]
NLDL.org/2025/Conference
2025
title: Review summary: The paper proposes TChemGNN, a GNN to predict molecular properties from their graph representation. TChemGNN integrates local and global 3D molecular features into the node features. It is evaluated on the ESOL and FreeSolv benchmark, where it outperforms existing models. strengths: The only part that I find innovative and surprising is the fact that no pooling is performed, but rather only the feature of a single node provides the signal for the whole graph. However, this part is not investigated at all, which in my opinion is a shame. weaknesses: I'm confused by the claims made in this paper. First, they integrate 3D and molecular features, as was done years ago in models like SchNet or DimeNet(++). Therefore, the paper does not present anything "innovative", as the authors claim in the abstract. Second, the title of this paper hints at its efficiency, which is, however, never evaluated in the paper. The authors never clarify _in what respect_ this model is more efficient. For example, GAT is among the slower graph convolutions; replacing it with GIN should make this architecture even more efficient. The model has only 13k parameters, but there is no way to know what would happen with more parameters. In summary, the innovation and efficiency claims are unsubstantiated and the experimental part is lacking. confidence: 4 justification: I suggest resubmitting this paper when it is ready and when its claims are better substantiated. It would also be interesting to expand on the "no-pooling" finding, which in my opinion is the most interesting insight in this paper.
43B1iDOq6l
Efficient learning of molecule properties with Graph Neural Networks and 3D molecule features
[]
Graph Neural Networks (GNNs) have emerged as a powerful tool in predicting molecular properties based on structural data. While GNNs excel in identifying local patterns within molecules, their ability to capture global properties remains limited due to inherent structural challenges such as oversmoothing. We introduce an innovative GNN-based model that integrates global 3D molecular features with standard graph representations to enhance the prediction of molecular properties. The proposed model is evaluated using benchmark datasets ESOL and FreeSolv and it outperforms existing models. It demonstrates the crucial benefit of giving GNN models easy access to global information about the graph, in the context of applications to chemistry. Additionally, the model's architecture allows for efficient training with relatively modest computational resources, making it practical for widespread application.
[ "Graph neural networks", "molecules", "graphs", "deep learning" ]
https://openreview.net/pdf?id=43B1iDOq6l
https://openreview.net/forum?id=43B1iDOq6l
WlqX5jhQGZ
meta_review
1,730,561,489,460
43B1iDOq6l
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission27/Area_Chair_7jcw" ]
NLDL.org/2025/Conference
2025
metareview: The paper proposes TChemGNN, a graph neural network (GNN) model for predicting molecular properties by integrating both local and global 3D molecular features. The suggested approach outperforms the existing models on the ESOL and FreeSolv datasets, with the main innovation highlighted as a no-pooling design where a single node's features provide the graph's signal. However, the reviewers raise several concerns that limit the paper's suitability for acceptance in its current form. Strengths: 1) Performance: All reviewers noted that TChemGNN demonstrates strong results on the chosen benchmarks, with effective integration of local and global 3D molecular features. 2) No-Pooling Design: The no-pooling approach, where a single node readout is used instead of pooling, was seen as innovative and promising by multiple reviewers. Weaknesses: 1) Reviewers pointed out that integrating 3D features is not novel, with similar techniques seen in models like SchNet and DimeNet++. Consequently, they suggested that the authors should moderate their claims of innovation on this part. 2) The paper's reliance on only two datasets (ESOL and FreeSolv) was considered insufficient to support generalizability. Reviewers recommended including more datasets from the Open Graph Benchmark (OGB) or others used in molecular graph property prediction. 3) Reviewers noted that claims about efficiency and novelty were not sufficiently backed by evidence. In particular, comparisons with other GNN architectures lacked tuning, making it difficult to establish the model's performance and efficiency relative to the state-of-the-art. 4) While the authors aimed to make the paper accessible to non-AI researchers (e.g., chemists), the reviewers felt that a machine learning venue like NLDL requires a more targeted AI-focused style, with greater emphasis on methodological rigor. In sum, reviewers ( with a high confidence average of 3.66) suggest resubmission with more robust baselines, error bounds, and detailed hyperparameter choices. They also recommend further validation of efficiency claims and discussion of alternative molecular representations. In reviewing the paper and the reviewer's feedback, I see promise in this work, particularly in the innovative no-pooling design. However, as all reviewers highlighted, the paper would benefit from stronger evidence, more robust comparisons, and clearer contextualization within the current ML literature. Since the authors did not submit any revisions, and no reviewers support the work for even weak acceptance, the paper may not be ready for acceptance in its current format. recommendation: Reject suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 4: The area chair is confident but not absolutely certain
43B1iDOq6l
Efficient learning of molecule properties with Graph Neural Networks and 3D molecule features
[]
Graph Neural Networks (GNNs) have emerged as a powerful tool in predicting molecular properties based on structural data. While GNNs excel in identifying local patterns within molecules, their ability to capture global properties remains limited due to inherent structural challenges such as oversmoothing. We introduce an innovative GNN-based model that integrates global 3D molecular features with standard graph representations to enhance the prediction of molecular properties. The proposed model is evaluated using benchmark datasets ESOL and FreeSolv and it outperforms existing models. It demonstrates the crucial benefit of giving GNN models easy access to global information about the graph, in the context of applications to chemistry. Additionally, the model's architecture allows for efficient training with relatively modest computational resources, making it practical for widespread application.
[ "Graph neural networks", "molecules", "graphs", "deep learning" ]
https://openreview.net/pdf?id=43B1iDOq6l
https://openreview.net/forum?id=43B1iDOq6l
97cXktTeNp
official_review
1,727,564,851,560
43B1iDOq6l
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission27/Reviewer_o75a" ]
NLDL.org/2025/Conference
2025
title: TChemGNN Proposed as New Property Predictor for Molecules summary: The paper proposes a new GNN architecture called TChemGNN for molecular property modeling. In Section 1, the paper outlines some background related to using GNN for molecular property prediction and in Section 2, the paper discusses some prior work. Section 3 introduces the proposed TChemGNN model, which borrows the architecture from prior work and adds global features to the input representation. Section 4 describes the results on the ESOL dataset and Section 5 describes the results on the FreeSolv dataset. Generally both experiments suggest that TChemGNN achieves lower error values compared to the baselines studied. Section 6 provides a discussion and conclusion that summarizes the main results. strengths: * The paper provides many relevant details related to the proposed architecture, the input space and the experiments conducted. * The experiments presented generally show lower error for the proposed method. weaknesses: * The paper does not provide discussion of relevant prior work pertaining to GNNs that use global features for modeling [1] [2]. * The paper does not describe relevant details of the datasets used for the experiments (e.g., their size) that would be relevant for understanding some of the claims. * The claims about the proposed TChemGNN being fast to train are not supported with evidence and could be confounded with the size of the dataset. * Since the paper claims improvement in features for representation learning, I would also recommend discussing different molecular representations, such as SELFIES [3], Group SELFIES [4] and SAFE [5]. [1] Chen C, Ye W, Zuo Y, Zheng C, Ong SP. Graph networks as a universal machine learning framework for molecules and crystals. Chemistry of Materials. 2019 Apr 10;31(9):3564-72. [2] Chen C, Ong SP. A universal graph deep learning interatomic potential for the periodic table. Nature Computational Science. 2022 Nov;2(11):718-28. [3] Krenn M, Häse F, Nigam A, Friederich P, Aspuru-Guzik A. Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation. Machine Learning: Science and Technology. 2020 Oct 28;1(4):045024. [4] Cheng AH, Cai A, Miret S, Malkomes G, Phielipp M, Aspuru-Guzik A. Group SELFIES: a robust fragment-based molecular string representation. Digital Discovery. 2023;2(3):748-58. [5] Noutahi E, Gabellini C, Craig M, Lim JS, Tossou P. Gotta be SAFE: a new framework for molecular design. Digital Discovery. 2024;3(4):796-804. confidence: 4 justification: While the paper provides a potentially interesting and useful addition for molecular modeling, I think that the paper needs to properly justify the experiments performed (i.e., why those particular datasets were chosen) and provide more evidence for some of their claims (e.g., compute details for "light model training"). On top of that, it would be useful for the authors to perform an ablation with their proposed representation on other model architectures.
3FswRo4Lhj
SPARDACUS SafetyCage: A new misclassification detector
[ "Pål Vegard Johnsen", "Filippo Remonato", "Shawn Benedict", "Albert Ndur-Osei" ]
Given the increasing adoption of machine learning techniques in society and industry, it is important to put procedures in place that can infer and signal whether the prediction of an ML model may be unreliable. This is not only relevant for ML specialists, but also for laypersons who may be end-users. In this work, we present a new method for flagging possible misclassifications from a feed-forward neural network in a general multi-class problem, called SPARDA-enabled Classification Uncertainty Scorer (SPARDACUS). For each class and layer, the probability distribution functions of the activations for both correctly and wrongly classified samples are recorded. Using a Sparse Difference Analysis (SPARDA) approach, an optimal projection along the direction maximizing the Wasserstein distance enables $p$-value computations to confirm or reject the class prediction. Importantly, while most existing methods act on the output layer only, our method can in addition be applied on the hidden layers in the neural network, thus being useful in applications, such as feature extraction, that necessarily exploit the intermediate (hidden) layers. We test our method on both a well-performing and under-performing classifier, on different datasets, and compare with other previously published approaches. Notably, while achieving performance on par with two state-of-the-art-level methods, we significantly extend in flexibility and applicability. We further find, for the models and datasets chosen, that the output layer is indeed the most valuable for misclassification detection, and adding information from previous layers does not necessarily improve performance in such cases.
[ "Misclassification detection", "uncertainty estimation", "Wasserstein distance", "hypothesis tests" ]
https://openreview.net/pdf?id=3FswRo4Lhj
https://openreview.net/forum?id=3FswRo4Lhj
droHEw0Ek2
official_review
1,729,028,423,709
3FswRo4Lhj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission29/Reviewer_CqCn" ]
NLDL.org/2025/Conference
2025
title: Review of SPARDACUS SafetyCage: A new misclassification detector summary: The authors propose SPARDACUS SafetyCage, a new method for predicting when a trained neural network will make a misclassification _before it happens_. The method builds on SafetyCage by leveraging not only correctly predicted samples but also incorrectly predicted samples to estimate two PDFs, on which the authors use classical statistical tests to predict whether new samples will be correctly or incorrectly classified. strengths: * The empirical results appear to show that the proposed method slightly outperforms previously proposed methods. * The research direction is interesting and useful for real-life scenarios. weaknesses: Although the proposed method shows overall improvements, I find that the work currently contains a number of overstatements and conflicting statements, and is written in a way that makes the method difficult to follow. I elaborate on this here and refer to the bottom of the weaknesses for minor comments. * One particularly important conflicting statement is that the authors write in the abstract "Importantly, while most existing methods act on the output layer only, our method can be applied on any layer in the neural network, thus being useful in applications, such as feature extraction, that necessarily exploit the intermediate (hidden) layers." but then write in the introduction that SafetyCage works on all layers. Could the authors please clarify which of these is true? Additionally, could the authors comment on why they think this is important when they empirically show that only using the output layer performs best anyway? * In general, I find the methods section difficult to follow because the method is largely described in words rather than mathematics. I think the method section would benefit significantly from including, for example, an algorithm figure or similar. If I were to implement this method myself, I would find it difficult to do so, as the "recipe" is spread throughout the text rather than stated concretely in an algorithm/method figure. * The authors never clearly describe on which data they estimate the PDFs $f_{i,l}, g_{i,l}$. Could the authors please clarify this? * The authors have some very strong statements in the conclusion, such as: * "SPARDACUS has the greatest potential improvement", for which I do not see the reasoning. * " ... it is easy to imagine applications classifying complex inputs (say, sound signals), where an NN is used as a feature-extractor, mapping the input signal to some embedding space, which only later is taken by a specialised model for classification...", but do the authors have any reason to believe this should work based on their hidden-layer tests? * The authors write "hence the SPARDACUS has greater generalization capabilities", but again I do not see why this should necessarily be the case just because the method can have its projections and PDFs updated. Could the authors comment on this? * Also, even though SPARDACUS improves on previously proposed methods, I find the final sentence quite overstated ("SPARDACUS is in sum a very powerful, extremely flexible state-of-the-art approach") considering the extent of the empirical experiments. I am not saying the empirical results are not significant, but the generalization to and proof of working on other modalities, datasets, model sizes, etc. is still lacking (which is fine for a first version of a paper, just not for such strong statements, in my opinion). * In lines 398-399 the authors say "but it is clear that this need not be the case for all applications", but again this is not really clear to me, especially based on the empirical results. * In general, I would also strongly prefer that the authors run multiple seeds when training the models and evaluating the methods, in order to get standard errors on the means presented in Table 1. Currently the methods are very comparable, and it is hard to say whether these methods are statistically significantly different (at least subsets of them); this is especially highlighted by the fact that the results from the original SafetyCage paper are quite different from those reported in this paper. I believe the authors mention that they have the same experimental setup as in the SafetyCage paper, yet the metrics are different, which could indicate that there are somewhat large variations in method performance (although I could have misunderstood this). Minor comments * It is unclear to me what the authors are trying to say in the sentence starting with "Nonetheless.." in lines 058-061. * The authors write "mixture Gaussian distribution" in line 183, but I believe this should be "Gaussian mixture distribution" or "mixture of Gaussians distribution". confidence: 3 justification: Although the proposed method is novel, there are two key points behind my score. Firstly, there are the previously mentioned overstatements and conflicting points in the paper, which could be addressed relatively easily. More important are the considerations on the statistical significance of the results, which in my opinion could only be addressed by running additional seeds to obtain standard errors for the different methods. In my opinion this is not an overly large ask considering the scale of these experiments. Should these two main issues be addressed, I would be willing to raise my score.
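To make the summarized mechanism concrete, the following is a small sketch of the two-PDF idea (densities of correctly vs. wrongly classified samples plus a score for new samples). It is not the authors' implementation: the SPARDA/Wasserstein projection is assumed to have already been applied, the 1D data are synthetic placeholders, and the log-density-ratio score and zero threshold are simplifications of the paper's $p$-value test.

```python
# Sketch of the two-PDF idea: estimate densities of a 1D projected activation for
# correctly vs. wrongly classified samples, then score new samples by the
# log-density ratio. All data and thresholds below are placeholders.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
proj_correct = rng.normal(loc=2.0, scale=1.0, size=500)  # correctly classified
proj_wrong = rng.normal(loc=0.0, scale=1.5, size=200)    # misclassified

f_correct = gaussian_kde(proj_correct)  # density for correct predictions
g_wrong = gaussian_kde(proj_wrong)      # density for misclassifications

def misclassification_score(z: float) -> float:
    """Higher values mean the activation looks more like a misclassification."""
    eps = 1e-12
    log_ratio = np.log(g_wrong([z]) + eps) - np.log(f_correct([z]) + eps)
    return log_ratio.item()

flag = misclassification_score(0.3) > 0.0  # the threshold is a free parameter
print(misclassification_score(0.3), flag)
```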
3FswRo4Lhj
SPARDACUS SafetyCage: A new misclassification detector
[ "Pål Vegard Johnsen", "Filippo Remonato", "Shawn Benedict", "Albert Ndur-Osei" ]
Given the increasing adoption of machine learning techniques in society and industry, it is important to put procedures in place that can infer and signal whether the prediction of an ML model may be unreliable. This is not only relevant for ML specialists, but also for laypersons who may be end-users. In this work, we present a new method for flagging possible misclassifications from a feed-forward neural network in a general multi-class problem, called SPARDA-enabled Classification Uncertainty Scorer (SPARDACUS). For each class and layer, the probability distribution functions of the activations for both correctly and wrongly classified samples are recorded. Using a Sparse Difference Analysis (SPARDA) approach, an optimal projection along the direction maximizing the Wasserstein distance enables $p$-value computations to confirm or reject the class prediction. Importantly, while most existing methods act on the output layer only, our method can in addition be applied on the hidden layers in the neural network, thus being useful in applications, such as feature extraction, that necessarily exploit the intermediate (hidden) layers. We test our method on both a well-performing and under-performing classifier, on different datasets, and compare with other previously published approaches. Notably, while achieving performance on par with two state-of-the-art-level methods, we significantly extend in flexibility and applicability. We further find, for the models and datasets chosen, that the output layer is indeed the most valuable for misclassification detection, and adding information from previous layers does not necessarily improve performance in such cases.
[ "Misclassification detection", "uncertainty estimation", "Wasserstein distance", "hypothesis tests" ]
https://openreview.net/pdf?id=3FswRo4Lhj
https://openreview.net/forum?id=3FswRo4Lhj
abIrUvYuXf
meta_review
1,730,125,210,752
3FswRo4Lhj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission29/Area_Chair_4eJ2" ]
NLDL.org/2025/Conference
2025
metareview: This paper addresses the problem of reliability and misclassification detection in machine learning. The reviewers appreciate the novelty, the writing, and the research direction. After the review and rebuttal stage, the reviewers reached a consensus to accept the paper. The reviewers have some final questions and concerns that could be addressed. For example, one novelty claim is that the proposed method can be used in all layers, but this also seems to be the case for SafetyCage, hence some clarification is required. There are also shared concerns about hyperparameter tuning. The AC agrees with the positive assessment and encourages the authors to update the paper based on the pointers given by the reviewers. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 3: The area chair is somewhat confident
3FswRo4Lhj
SPARDACUS SafetyCage: A new misclassification detector
[ "Pål Vegard Johnsen", "Filippo Remonato", "Shawn Benedict", "Albert Ndur-Osei" ]
Given the increasing adoption of machine learning techniques in society and industry, it is important to put procedures in place that can infer and signal whether the prediction of an ML model may be unreliable. This is not only relevant for ML specialists, but also for laypersons who may be end-users. In this work, we present a new method for flagging possible misclassifications from a feed-forward neural network in a general multi-class problem, called SPARDA-enabled Classification Uncertainty Scorer (SPARDACUS). For each class and layer, the probability distribution functions of the activations for both correctly and wrongly classified samples are recorded. Using a Sparse Difference Analysis (SPARDA) approach, an optimal projection along the direction maximizing the Wasserstein distance enables $p$-value computations to confirm or reject the class prediction. Importantly, while most existing methods act on the output layer only, our method can in addition be applied on the hidden layers in the neural network, thus being useful in applications, such as feature extraction, that necessarily exploit the intermediate (hidden) layers. We test our method on both a well-performing and under-performing classifier, on different datasets, and compare with other previously published approaches. Notably, while achieving performance on par with two state-of-the-art-level methods, we significantly extend in flexibility and applicability. We further find, for the models and datasets chosen, that the output layer is indeed the most valuable for misclassification detection, and adding information from previous layers does not necessarily improve performance in such cases.
[ "Misclassification detection", "uncertainty estimation", "Wasserstein distance", "hypothesis tests" ]
https://openreview.net/pdf?id=3FswRo4Lhj
https://openreview.net/forum?id=3FswRo4Lhj
R5cWIDrdKI
official_review
1,728,520,925,074
3FswRo4Lhj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission29/Reviewer_eUwr" ]
NLDL.org/2025/Conference
2025
title: There are valuable contributions in the paper summary: The paper suggests a way of measuring the probability of misclassification of a machine learning model. The paper suggests using intermediate-layer activations in addition to the output-layer activations in this process. It is known that when the model outputs uncertain predictions (the maximum softmax probability is small) the model is more likely to misclassify, which is what currently known detectors such as MSP and DOCTOR exploit. Other layer activations, apart from the final softmax, are used in known methods such as SafetyCage, but there are limitations of this approach which are solved by the proposed paper. The paper proposes to project the activation statistics into 1D spaces for each layer and class to obtain 1D probability densities using the SPARDA method, and then to use machine learning or statistical tests to classify between two states: correctly predicted and misclassified. This is in contrast to previous work, SafetyCage, which used an out-of-distribution detection method: the proposed direct training of the final misclassification detector on both correctly and incorrectly classified examples is expected to improve the robustness and accuracy of the detector. strengths: The work contains novel ideas, extending previously known misclassification detectors. In particular, a hypothesis test based on the density ratio between two distributions (positive and negative) is used, instead of an out-of-distribution test on a single distribution (positive). This additionally leads to fewer assumptions; in particular, the distributions no longer need to be assumed to be Gaussian. The ideas are mathematically sound and well described, and the experiments further confirm the theoretical results. weaknesses: First, there is a fundamental question about the extent to which we can use misclassification detectors to further enhance the prediction quality of the model. If the misclassification detector uses only information from the model itself, in case of misclassification that information will be incorrect and therefore may not be suitable for further judgement. However, the existence of other misclassification detectors shows that it is indeed possible to get extra information from the model itself. Still, it would be nice to touch on this question in the paper and provide additional justification for why the proposed detector is able to gather that extra information from the model. Second, there is a serious problem with tuning the threshold on the test set, as described in line 308 mentioning the "optimal threshold on the test data". The paper also mentions a "threshold optimized on the training data" in line 319. I believe that both variants are incorrect: the test threshold should never be used, according to the golden rule of not using the test set for any calculations that affect final predictions (it would be better to remove such results); and the training threshold is not optimal as suggested, due to possible overfitting. Instead, a validation set should be used for this purpose. In Figures 3 and 4 we see that the proposed method is very sensitive to the threshold, which additionally confirms that the threshold should be re-evaluated correctly on the correct data subset. Additionally, the evaluation is limited to two specific image datasets: MNIST and CIFAR-10. It would be nice to test the method on non-image datasets, such as the openly available datasets from the UCI datasets repository.
confidence: 4 justification: The results are novel enough, improving on both classical misclassification detectors, such as MSP, and modern detectors such as SafetyCage. There are problems related to threshold selection, mentioned in the Weaknesses section, but I believe that they can easily be corrected in the final version of the paper. Evaluation on additional non-image datasets would further raise the credibility of the proposed method, but the current evaluation is enough to justify the proposed theoretical contributions. final_rebuttal_confidence: 5 final_rebuttal_justification: - The rebuttal addressed the main concern of multiple reviewers well: threshold selection is now properly done on a separate validation dataset. However, the reviewer's question about the limited choice of datasets was not addressed. Additionally, proper threshold selection revealed that the proposed method does not outperform all competitors on the given datasets. - My final rating is unchanged: I recommend acceptance. The lack of additional dataset evaluations and the updated comparison with other methods raise questions that call for further clarification of the experiments, so I would highly encourage the authors to additionally address the question of limited dataset evaluation (evaluate at least on another non-image dataset). This, however, does not invalidate the theoretical contribution, justifying the given rating.
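The reviewer's central request, choosing the detection threshold on a validation split rather than on the test set, can be illustrated with a short sketch. The scores, labels, and the balanced-accuracy criterion below are placeholders, not the paper's protocol.

```python
# Sketch of the protocol asked for above: tune the detector threshold on a
# validation split, then report metrics on the untouched test split. Scores and
# labels here are random placeholders standing in for detector outputs.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
val_scores, val_labels = rng.normal(size=1000), rng.integers(0, 2, size=1000)
test_scores, test_labels = rng.normal(size=1000), rng.integers(0, 2, size=1000)

candidates = np.quantile(val_scores, np.linspace(0.01, 0.99, 99))
best_t = max(candidates,
             key=lambda t: balanced_accuracy_score(val_labels,
                                                   (val_scores > t).astype(int)))

test_bacc = balanced_accuracy_score(test_labels, (test_scores > best_t).astype(int))
print(f"threshold={best_t:.3f}, test balanced accuracy={test_bacc:.3f}")
```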
3FswRo4Lhj
SPARDACUS SafetyCage: A new misclassification detector
[ "Pål Vegard Johnsen", "Filippo Remonato", "Shawn Benedict", "Albert Ndur-Osei" ]
Given the increasing adoption of machine learning techniques in society and industry, it is important to put procedures in place that can infer and signal whether the prediction of an ML model may be unreliable. This is not only relevant for ML specialists, but also for laypersons who may be end-users. In this work, we present a new method for flagging possible misclassifications from a feed-forward neural network in a general multi-class problem, called SPARDA-enabled Classification Uncertainty Scorer (SPARDACUS). For each class and layer, the probability distribution functions of the activations for both correctly and wrongly classified samples are recorded. Using a Sparse Difference Analysis (SPARDA) approach, an optimal projection along the direction maximizing the Wasserstein distance enables $p$-value computations to confirm or reject the class prediction. Importantly, while most existing methods act on the output layer only, our method can in addition be applied on the hidden layers in the neural network, thus being useful in applications, such as feature extraction, that necessarily exploit the intermediate (hidden) layers. We test our method on both a well-performing and under-performing classifier, on different datasets, and compare with other previously published approaches. Notably, while achieving performance on par with two state-of-the-art-level methods, we significantly extend in flexibility and applicability. We further find, for the models and datasets chosen, that the output layer is indeed the most valuable for misclassification detection, and adding information from previous layers does not necessarily improve performance in such cases.
[ "Misclassification detection", "uncertainty estimation", "Wasserstein distance", "hypothesis tests" ]
https://openreview.net/pdf?id=3FswRo4Lhj
https://openreview.net/forum?id=3FswRo4Lhj
L4dPFnudlQ
official_review
1,728,391,162,606
3FswRo4Lhj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission29/Reviewer_eSSQ" ]
NLDL.org/2025/Conference
2025
title: Statistical multi-layer misclassification detection for deep neural networks extending previous work summary: The paper presents an approach to infer the reliability of machine learning models, here specifically (deep) neural network classifiers, by introducing misclassification detection techniques that could be used to produce safer AI systems, which can be seen as a very important topic. Previously there have been several model-agnostic and neural-network-specific solutions (e.g., MSP and DOCTOR) based on the output (softmax) layer only, thresholding the maximum output or approximating the misclassification probability. The proposed approach, instead, considers combinations of activation values from any layer as model inputs, as in the previous SafetyCage approach. The previous work relies only on the probability distribution of correctly classified examples, a Gaussian approximation, and Mahalanobis-distance-based thresholding, whereas, to increase the robustness of previous methods, the proposed approach (SPARDACUS) utilises a couple of new improvements: 1) using the PDFs of activation values for both correctly and incorrectly classified samples, 2) applying projection techniques to produce 1D PDFs from the original high-dimensional PDF and combining these into a Gaussian mixture distribution across different layers, and 3) a likelihood-test-inspired statistical test for the final detection. Together these bring novelty to misclassification detection research. Experimental results on two image classification benchmarks show accuracy similar to or on par with state-of-the-art methods (MSP, DOCTOR), while outperforming the original SafetyCage method. Another interesting result is that in both cases the output layer is the most important one for the detection. The proposed method is sound and well formulated. However, based on the results, the full potential of the proposed approach against output-layer-only methods is not demonstrated on these particular empirical datasets. strengths: The paper is clearly written and structured, and fairly easy to follow. The building blocks of the proposed method are well justified in relation to the previous approach, showing an interesting technique for combining reliability information from different network layers. Although the building blocks are not novel by themselves, the combination gives a new approach to misclassification detection, improving on the previous methods in the empirical evaluations and performing on par with state-of-the-art output-layer-based methodologies, providing some interesting results and fresh ideas. weaknesses: Although the proposed approach is interesting and has some novelties, the empirical evaluations are not fully convincing in showing the usefulness and full potential of multi-layer misclassification detection, at least on these benchmark image classification datasets. Here are some questions related to the results section: - Why are the results in the main text (Tables 1 and 2) shown with the detection threshold optimized on the test sets? That is not very practical; should they not be based on training or validation data instead? (Some of these results are presented in the appendix, but they are, in my opinion, more important than those currently in the main text.) - How sensitive are the different methods to the choice and optimization of the detection threshold (e.g., in relation to the shape of the curves in Figures 3 and 4)? confidence: 4 justification: The paper is well written and structured, providing a novel combination of techniques for the misclassification detection domain. The results are in line with some of the state-of-the-art methodologies. Although the paper gives some interesting results, the full potential and benefits of using multi-layer detection are not shown; a deeper analysis of the properties of the proposed methodology and additional datasets from different application domains could strengthen the study. It would also be good to swap some of the results between the appendix and the main text (i.e., showing in the main text the results obtained with a detection threshold chosen on the training data instead of the test data, and vice versa).
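For intuition on the projection step summarized in this review, here is a deliberately crude sketch that searches random unit directions for the one maximizing the 1D Wasserstein distance between projected activations of correct and misclassified samples. The actual SPARDA procedure solves a sparse optimization problem rather than doing random search, and the data below are synthetic.

```python
# Crude illustration of finding a direction that maximises the 1D Wasserstein
# distance between projected activations of correct and misclassified samples.
# SPARDA solves a sparse optimisation problem; here we only run a random search.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
d = 16                                             # activation dimension (placeholder)
acts_correct = rng.normal(0.0, 1.0, size=(500, d))
acts_wrong = rng.normal(0.5, 1.0, size=(200, d))   # shifted so a good direction exists

best_dir, best_w = None, -np.inf
for _ in range(2000):
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    w = wasserstein_distance(acts_correct @ v, acts_wrong @ v)
    if w > best_w:
        best_dir, best_w = v, w

print(f"best 1D Wasserstein distance found: {best_w:.3f}")
```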
3FswRo4Lhj
SPARDACUS SafetyCage: A new misclassification detector
[ "Pål Vegard Johnsen", "Filippo Remonato", "Shawn Benedict", "Albert Ndur-Osei" ]
Given the increasing adoption of machine learning techniques in society and industry, it is important to put procedures in place that can infer and signal whether the prediction of an ML model may be unreliable. This is not only relevant for ML specialists, but also for laypersons who may be end-users. In this work, we present a new method for flagging possible misclassifications from a feed-forward neural network in a general multi-class problem, called SPARDA-enabled Classification Uncertainty Scorer (SPARDACUS). For each class and layer, the probability distribution functions of the activations for both correctly and wrongly classified samples are recorded. Using a Sparse Difference Analysis (SPARDA) approach, an optimal projection along the direction maximizing the Wasserstein distance enables $p$-value computations to confirm or reject the class prediction. Importantly, while most existing methods act on the output layer only, our method can in addition be applied on the hidden layers in the neural network, thus being useful in applications, such as feature extraction, that necessarily exploit the intermediate (hidden) layers. We test our method on both a well-performing and under-performing classifier, on different datasets, and compare with other previously published approaches. Notably, while achieving performance on par with two state-of-the-art-level methods, we significantly extend in flexibility and applicability. We further find, for the models and datasets chosen, that the output layer is indeed the most valuable for misclassification detection, and adding information from previous layers does not necessarily improve performance in such cases.
[ "Misclassification detection", "uncertainty estimation", "Wasserstein distance", "hypothesis tests" ]
https://openreview.net/pdf?id=3FswRo4Lhj
https://openreview.net/forum?id=3FswRo4Lhj
Ax7QQ6f3Fm
decision
1,730,901,555,747
3FswRo4Lhj
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: Given the AC's positive recommendation, we recommend an oral and a poster presentation, in line with the AC's and reviewers' recommendations.
1U0kkt7ymn
Transformers at a Fraction
[ "Aritra Mukhopadhyay", "Rucha Bhalchandra Joshi", "Nidhi Tiwari", "Subhankar Mishra" ]
Transformer-based large models, such as GPT, are known for their performance and ability to effectively address tasks. Transformer-based models often have many parameters, which are trained to achieve high-performance levels. As a result, they cannot be run locally on devices with smaller memory sizes, such as mobile phones, necessitating the use of these models remotely by sending the data to the cloud. This exposes us to privacy concerns over sending confidential data to the server, among others. In this work, we propose a method to make these large models easier to run on devices with much smaller memory while sacrificing little to no performance. We investigate quaternion neural networks, which can reduce the number of parameters to one-fourth of the original real-valued model when employed efficiently. Additionally, we explore sparse networks created by pruning weights as a method of parameter reduction, following the Lottery Ticket Hypothesis. We perform the experiments on vision and language tasks on their respective datasets. We observe that pruned quaternion models perform better than the real-valued models in severely sparse conditions.
[ "transformers", "quaternion neural networks", "LTH", "pruning", "parameter reduction" ]
https://openreview.net/pdf?id=1U0kkt7ymn
https://openreview.net/forum?id=1U0kkt7ymn
q8igqjyZcE
decision
1,730,901,555,878
1U0kkt7ymn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We recommend an oral and a poster presentation given the AC's and reviewers' recommendations.
1U0kkt7ymn
Transformers at a Fraction
[ "Aritra Mukhopadhyay", "Rucha Bhalchandra Joshi", "Nidhi Tiwari", "Subhankar Mishra" ]
Transformer-based large models, such as GPT, are known for their performance and ability to effectively address tasks. Transformer-based models often have many parameters, which are trained to achieve high-performance levels. As a result, they cannot be run locally on devices with smaller memory sizes, such as mobile phones, necessitating the use of these models remotely by sending the data to the cloud. This exposes us to privacy concerns over sending confidential data to the server, among others. In this work, we propose a method to make these large models easier to run on devices with much smaller memory while sacrificing little to no performance. We investigate quaternion neural networks, which can reduce the number of parameters to one-fourth of the original real-valued model when employed efficiently. Additionally, we explore sparse networks created by pruning weights as a method of parameter reduction, following the Lottery Ticket Hypothesis. We perform the experiments on vision and language tasks on their respective datasets. We observe that pruned quaternion models perform better than the real-valued models in severely sparse conditions.
[ "transformers", "quaternion neural networks", "LTH", "pruning", "parameter reduction" ]
https://openreview.net/pdf?id=1U0kkt7ymn
https://openreview.net/forum?id=1U0kkt7ymn
cUJVRv9FOb
official_review
1,727,779,897,026
1U0kkt7ymn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission31/Reviewer_xFiY" ]
NLDL.org/2025/Conference
2025
title: A good paper which needs more details about the theory behind the method summary: The authors present a new deep learning method using quaternions to improve computation and memory requirements for use on smaller, local hardware. They also show that their method can be combined with the Lottery Ticket Hypothesis to prune the neural network and achieve even smaller networks, requiring even less memory. strengths: The paper reads well and compares the approach thoroughly with standard (real-numbered) neural networks on vision and text tasks. Deep learning methods are challenging to bring into settings where GPU/accelerated hardware is not available (e.g., handheld, medical and remote devices), and the combination of the two methods presented by the authors is promising. I enjoyed that the pruning is tested on both real and quaternion models, which acts as an ablation study. weaknesses: The authors focus on parameter count, but there is no description or testing of memory and computation requirements at runtime, in the form of FLOPS and MB of (V)RAM used. There is a description of the quaternion model in part 2 as well as a figure, but a mathematical description of the method is lacking (i.e., an equation or algorithm showing the concatenation of the q matrices to obtain W^r, which is multiplied with the input). If space is needed, Algorithm 1 could be removed, as those operations are fairly straightforward and aligned with the LTH paper. Although the paper states that only 25% of the parameters are required when using this approach (thus reducing the model weight), the computation may still require the same amount of memory and the same number of operations, as the W^r matrix needs to be computed. Perhaps some optimizations were made, where the structure of W^r is exploited to enable faster computations and a smaller memory footprint, but this is not mentioned in the text. Minor comments: L015-L016: The general case of quaternion neural networks does not necessarily achieve a weight reduction of 75%; the authors purposely reduce the number of used parameters by structuring the weight matrix accordingly. L087: Cosine similarity is perhaps a misnomer here, as the values are in [0,1] instead of [-1,1]. The dot product (what is actually being computed) is unnormalized, unlike the cosine function. L105: I think it is rather confusing to use r,i,j,k as coefficient names (L114) instead of a,b,c,d as mentioned in L099. L109: there is a "." before the citation. Figure 1: Add the row/column sizes of the W^r matrix; the W^r matrix seems to have only four quaternions being reused in every row in a rolling pattern, but more colors should be used to highlight the fact that each q block is indeed different. confidence: 5 justification: I have been involved in work requiring pruning neural networks before, as well as work related to quaternions, although not directly linked to deep learning as in this paper.
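Regarding the reviewer's request for an explicit construction of W^r from the q matrices, the following sketch shows one common Hamilton-product block layout and the resulting 4x parameter saving. The sign and ordering convention may differ from the paper's; this is an illustration, not the authors' code.

```python
# One common way to build the real weight matrix W^r of a quaternion linear layer
# from its four component matrices (Hamilton-product block structure). The exact
# sign/ordering convention in the paper may differ; this is only for illustration.
import numpy as np

n_out, n_in = 8, 6   # number of quaternion outputs/inputs (real dims are 4x these)
rng = np.random.default_rng(0)
r, i, j, k = (rng.normal(size=(n_out, n_in)) for _ in range(4))

W_r = np.block([
    [r, -i, -j, -k],
    [i,  r, -k,  j],
    [j,  k,  r, -i],
    [k, -j,  i,  r],
])                                   # shape (4*n_out, 4*n_in)

x = rng.normal(size=4 * n_in)        # real-valued input [x_r; x_i; x_j; x_k]
y = W_r @ x                          # equivalent real-valued forward pass

stored = 4 * n_out * n_in            # parameters actually stored (r, i, j, k)
dense = (4 * n_out) * (4 * n_in)     # parameters of an unconstrained real layer
print(W_r.shape, stored / dense)     # -> (32, 24) and 0.25
```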
1U0kkt7ymn
Transformers at a Fraction
[ "Aritra Mukhopadhyay", "Rucha Bhalchandra Joshi", "Nidhi Tiwari", "Subhankar Mishra" ]
Transformer-based large models, such as GPT, are known for their performance and ability to effectively address tasks. Transformer-based models often have many parameters, which are trained to achieve high-performance levels. As a result, they cannot be run locally on devices with smaller memory sizes, such as mobile phones, necessitating the use of these models remotely by sending the data to the cloud. This exposes us to privacy concerns over sending confidential data to the server, among others. In this work, we propose a method to make these large models easier to run on devices with much smaller memory while sacrificing little to no performance. We investigate quaternion neural networks, which can reduce the number of parameters to one-fourth of the original real-valued model when employed efficiently. Additionally, we explore sparse networks created by pruning weights as a method of parameter reduction, following the Lottery Ticket Hypothesis. We perform the experiments on vision and language tasks on their respective datasets. We observe that pruned quaternion models perform better than the real-valued models in severely sparse conditions.
[ "transformers", "quaternion neural networks", "LTH", "pruning", "parameter reduction" ]
https://openreview.net/pdf?id=1U0kkt7ymn
https://openreview.net/forum?id=1U0kkt7ymn
Q8PJ9UV4y1
official_review
1,728,393,081,950
1U0kkt7ymn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission31/Reviewer_iDxz" ]
NLDL.org/2025/Conference
2025
title: Review of Transformers at a fraction summary: The authors investigate the accuracy of quaternion transformers and the accuracy of the Lottery Ticket Hypothesis (LTH) for pruning (quaternion) transformers. Specifically, the Vision Transformer (ViT) and nanoGPT architectures are investigated. strengths: The authors performed an extensive evaluation of their proposed approach on existing transformer architectures. Through experimental validation, the authors show that incorporating quaternions in a transformer architecture and performing pruning using LTH increases the computational efficiency while maintaining performance. Furthermore, the implementation is available open source as a PyTorch package. weaknesses: It would be better to formulate the 4 research questions as research contributions. Then it would also be easier to emphasise the novelty in this paper. For example, quaternion transformers exist. Does this paper mean that this extensive evaluation is not available in the literature? If this evaluation is done here for the first time, it could be emphasised more. Also, in the literature study in this paper, no references are made to existing quaternion transformer networks. The same holds for the Lottery Ticket Hypothesis: there are gaps in the literature study, for example: https://arxiv.org/abs/2005.03454 Does this paper tackle an additional problem related specifically to the ViT or nanoGPT architecture? confidence: 2 justification: I am not an expert in the current state of the art regarding transformers, but I think the paper overall contains an interesting contribution that is worth sharing. However, I would say the current shortcomings need to be addressed before the paper can be accepted.
1U0kkt7ymn
Transformers at a Fraction
[ "Aritra Mukhopadhyay", "Rucha Bhalchandra Joshi", "Nidhi Tiwari", "Subhankar Mishra" ]
Transformer-based large models, such as GPT, are known for their performance and ability to effectively address tasks. Transformer-based models often have many parameters, which are trained to achieve high-performance levels. As a result, they cannot be run locally on devices with smaller memory sizes, such as mobile phones, necessitating the use of these models remotely by sending the data to the cloud. This exposes us to privacy concerns over sending confidential data to the server, among others. In this work, we propose a method to make these large models easier to run on devices with much smaller memory while sacrificing little to no performance. We investigate quaternion neural networks, which can reduce the number of parameters to one-fourth of the original real-valued model when employed efficiently. Additionally, we explore sparse networks created by pruning weights as a method of parameter reduction, following the Lottery Ticket Hypothesis. We perform the experiments on vision and language tasks on their respective datasets. We observe that pruned quaternion models perform better than the real-valued models in severely sparse conditions.
[ "transformers", "quaternion neural networks", "LTH", "pruning", "parameter reduction" ]
https://openreview.net/pdf?id=1U0kkt7ymn
https://openreview.net/forum?id=1U0kkt7ymn
B8nuqAWm1P
meta_review
1,730,316,649,377
1U0kkt7ymn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission31/Area_Chair_77ov" ]
NLDL.org/2025/Conference
2025
metareview: In this paper the authors show that quaternion neural networks and sparse networks created by pruning weights can reduce the number of parameters in transformer-based models, making them easier to run on devices with smaller memory. Experiments on vision and language tasks show that pruned quaternion models perform better than real-valued models in severely sparse conditions. The proposed method is implemented in a PyTorch package called qytorch. Quaternion variations of Transformer models reduce the weight parameters to one-fourth of those of real-valued Transformers. Pruning these quaternion-based models using the Lottery Ticket Hypothesis maintains performance comparable to real-valued Transformers. Combining them with other techniques like quantisation and pruning can lead to more efficient and deployable models. These are neat results worth publishing and presenting at the conference. recommendation: Accept (Oral) suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down confidence: 5: The area chair is absolutely certain
1U0kkt7ymn
Transformers at a Fraction
[ "Aritra Mukhopadhyay", "Rucha Bhalchandra Joshi", "Nidhi Tiwari", "Subhankar Mishra" ]
Transformer-based large models, such as GPT, are known for their performance and ability to effectively address tasks. Transformer-based models often have many parameters, which are trained to achieve high-performance levels. As a result, they cannot be run locally on devices with smaller memory sizes, such as mobile phones, necessitating the use of these models remotely by sending the data to the cloud. This exposes us to privacy concerns over sending confidential data to the server, among others. In this work, we propose a method to make these large models easier to run on devices with much smaller memory while sacrificing little to no performance. We investigate quaternion neural networks, which can reduce the number of parameters to one-fourth of the original real-valued model when employed efficiently. Additionally, we explore sparse networks created by pruning weights as a method of parameter reduction, following the Lottery Ticket Hypothesis. We perform the experiments on vision and language tasks on their respective datasets. We observe that pruned quaternion models perform better than the real-valued models in severely sparse conditions.
[ "transformers", "quaternion neural networks", "LTH", "pruning", "parameter reduction" ]
https://openreview.net/pdf?id=1U0kkt7ymn
https://openreview.net/forum?id=1U0kkt7ymn
4kJZPGiQV0
official_review
1,728,284,933,735
1U0kkt7ymn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission31/Reviewer_Xg9j" ]
NLDL.org/2025/Conference
2025
title: Review for transformers at a fraction summary: This manuscript proposes a method to reduce the number of parameters so that transformer-based large models can be used in on-device environments without significant performance degradation. By constructing the weight matrices of the query, key, and value of the transformer from quaternion components, the authors decrease the parameters of the transformer to $1\over4$. Also, they further diminish the number of parameters by pruning based on the Lottery Ticket Hypothesis (LTH). They demonstrate that their model shows similar or slightly lower performance than the original models with far fewer parameters on computer vision and language modeling tasks using Vision Transformer (ViT) and nanoGPT. strengths: By simply applying the quaternion method, the number of parameters of the transformer is reduced to $1 \over 4$ without any decrease in performance. \ Through the quaternion method, the performance degradation is smaller than or similar to that of the original model when the prune percentage is high (Fig. 3, 7), and the diversity of the pruned weights is also increased (Fig 4, 8). \ They distribute a PyPI package of the quaternion transformer to help with follow-up research. weaknesses: The quaternion method applied here requires that the dimensions of the real weight matrix be divisible by 4, so adjustments such as dummy class padding are required. \ If the number of parameters after pruning were provided, the parameter reduction performance could be shown more clearly. \ What criteria were used to select the prune rate $p$, such as $p=0.4$ for ViT on the CIFAR10 dataset? \ The models used in the experiments are too small to be called large models. It was good to show the results for each task, but it is unclear whether the method works sufficiently well in a large model. \ The contribution is limited. Methods for reducing the parameters of the transformer based on the quaternion method have been proposed previously [1], and LTH-based pruning methods have also been used before [2, 3]. \ Furthermore, the proposed method requires iterative pruning with $n_p$ steps since it is based on the initial version of LTH-based pruning, which increases the model training time. If methods for obtaining 'winning tickets' faster are used [4, 5], the efficiency of the model can be improved. \ From the current title, it is difficult to fully understand what the authors claim, such as model pruning or parameter reduction. Minor * Table 1: There is no result for the OpenWebText dataset * (typo) line 259: "CIFAR" -> "MedMNIST" * (typo) line 227: Prune rate might be "0.3, 0.5, and 0.3" -> "0.3, 0.5, and 0.7" * The reference page needs to be rearranged [1] Tay, Y., Zhang, A., Tuan, L. A., Rao, J., Zhang, S., Wang, S., ... & Hui, S. C. (2019). Lightweight and efficient neural natural language processing with quaternion networks. arXiv preprint arXiv:1906.04393. \ [2] Morcos, A., Yu, H., Paganini, M., & Tian, Y. (2019). One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers. Advances in neural information processing systems, 32. \ [3] Yu, H., Edunov, S., Tian, Y., & Morcos, A. S. (2019). Playing the lottery with rewards and multiple languages: lottery tickets in rl and nlp. arXiv preprint arXiv:1906.02768. \ [4] Lee, N., Ajanthan, T., & Torr, P. H. (2018). Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340. \ [5] Wang, C., Zhang, G., & Grosse, R. (2020). Picking winning tickets before training by preserving gradient flow. arXiv preprint arXiv:2002.07376. confidence: 3 justification: The method can stably reduce many parameters while maintaining performance by replacing the weight matrices of the transformer with quaternion-structured ones. The method shows similar performance with fewer parameters in small transformer-based models, but it has not been proven to work well in large models. Nevertheless, it can be widely applied to models that utilize transformers with fewer parameters, and the provided PyPI package makes it easy to adopt. final_rebuttal_confidence: 3 final_rebuttal_justification: The authors explain the weaknesses I pointed out well, and their arguments are valid. The missing experiments have been supplemented in the rebuttal, and their ideas are novel.
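To illustrate the iterative LTH pruning loop with $n_p$ steps discussed in this review, here is a compact sketch of magnitude pruning with weight rewinding. The training function is a stub, pruning is done per tensor rather than globally, and none of this reproduces the paper's exact schedule.

```python
# Sketch of iterative magnitude pruning with weight rewinding (Lottery Ticket
# Hypothesis style). `train_model` is a stub standing in for the usual training
# loop; pruning here is per-tensor for brevity, not the paper's exact procedure.
import copy
import torch
import torch.nn as nn

def train_model(model):   # placeholder: run the normal training loop here
    return model

def iterative_lth_pruning(model: nn.Module, prune_rate: float, n_rounds: int):
    init_state = copy.deepcopy(model.state_dict())          # theta_0 for rewinding
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(n_rounds):
        train_model(model)
        for name, param in model.named_parameters():
            if param.dim() < 2:                              # skip biases/norm params
                continue
            alive = param[masks[name].bool()].abs()
            threshold = torch.quantile(alive, prune_rate)    # prune smallest weights
            masks[name] = masks[name] * (param.abs() > threshold).float()
        model.load_state_dict(init_state)                    # rewind to initialisation
        with torch.no_grad():
            for name, param in model.named_parameters():
                param.mul_(masks[name])                      # apply the cumulative mask
    return model, masks

pruned, masks = iterative_lth_pruning(nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                                    nn.Linear(64, 10)),
                                      prune_rate=0.2, n_rounds=3)
```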
14ptPJP6fG
Familiarity-Based Open-Set Recognition Under Adversarial Attacks
[ "Philip Enevoldsen", "Christian Gundersen", "Nico Lang", "Serge Belongie", "Christian Igel" ]
Open-set recognition (OSR), the identification of novel categories, can be a critical component when deploying classification models in real-world applications. Recent work has shown that familiarity-based scoring rules such as the Maximum Softmax Probability (MSP) or the Maximum Logit Score (MLS) are strong baselines when the closed-set accuracy is high. However, one of the potential weaknesses of familiarity-based OSR are adversarial attacks. Here, we study gradient-based adversarial attacks on familiarity scores for both types of attacks, False Familiarity and False Novelty attacks, and evaluate their effectiveness in informed and uninformed settings on TinyImageNet. Furthermore, we explore how novel and familiar samples react to adversarial attacks and formulate the adversarial reaction score as an alternative OSR scoring rule, which shows a high correlation with the MLS familiarity score.
[ "Open-set recognition", "Adversarial attacks" ]
https://openreview.net/pdf?id=14ptPJP6fG
https://openreview.net/forum?id=14ptPJP6fG
xtQ5J9BWeJ
meta_review
1,730,465,206,744
14ptPJP6fG
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission19/Area_Chair_UtMN" ]
NLDL.org/2025/Conference
2025
metareview: ## Paper Summary The paper investigates the vulnerability of Open-Set Recognition (OSR) systems to adversarial attacks of two distinct types: 1) False Familiarity (or False Negative), where the objective is to raise the logits so that a novel sample is regarded as familiar by the OSR system; and 2) False Novelty (or False Positive), which aims at lowering the logit scores of familiar classes so that a familiar sample is flagged as novel. The authors also explore different ways of generating adversarial inputs (based on the Fast Gradient Sign Method and an iterative approach), as well as whether the attack is informed or uninformed, meaning that the attacker may or may not know _a priori_ whether the input is familiar or novel (and hence which type of attack applies). Experiments conducted on the TinyImageNet dataset with diverse levels of adversarial perturbation showed that the logits can easily be increased with those adversarial perturbations, which leads to a higher effectiveness of FN attacks in the informed scenario. On the other hand, for the uninformed scenario, it was shown that FP attacks were more effective, as they destroy the original classification rankings by hiding familiar features, corroborating the Familiarity Hypothesis postulated by Dietterich & Guyer. Although there is no conclusion about which scenario (informed or uninformed) is better, the authors showed that iterative attacks are more effective than FGSM, which opens the door to further investigations in this direction. Finally, the authors proposed and discussed an alternative metric for OSR systems called the Adversarial Reaction Score (ARS), analyzing its correlation with well-established OSR scoring rules, such as the Maximum Logit Score (MLS) and the Maximum Softmax Probability (MSP). ## General Comments The paper is well written, with clear research questions, and showcases a substantial variety of test scenarios, which must have been hard to fit into a 6-page manuscript. Even though the proposed ARS metric has not shown a significant improvement, some important questions have been raised about the possibilities of adversarial attacks and their respective effects under an open-set scenario, which could inspire other researchers in the field to explore new possibilities. Nevertheless, I believe the authors did a great job, especially after having addressed all the reviewers' suggestions. I hereby recommend the acceptance of this paper. ## Strengths * Great paper organization, with very informative illustrations and examples, especially Figures 1 and 2; * High variety of research questions, which were properly addressed throughout the manuscript; * Highlights the potential perils of adversarial attacks based on image perturbations, which motivates research into more robust OSR methods. ## Weaknesses * The concepts presented in the paper are not very novel; * Absence of comparison against other networks, having only conducted tests on the VGG32 architecture; * Adoption of ARS as a new OSR metric is still questionable, although there is room for improvement; recommendation: Accept (Oral) suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down confidence: 4: The area chair is confident but not absolutely certain
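For concreteness, the two attack types and the FGSM-style perturbation discussed above can be sketched as a single-step attack on the maximum-logit familiarity score. The model, epsilon, and input range below are placeholders, and the paper's iterative variants and exact loss formulations are not reproduced.

```python
# Sketch of a single-step (FGSM-style) attack on the maximum-logit familiarity
# score: +epsilon raises the MLS (false familiarity), -epsilon lowers it (false
# novelty). The paper also uses iterative variants, which are omitted here.
import torch
import torch.nn as nn

def attack_mls(model: nn.Module, x: torch.Tensor, epsilon: float,
               false_familiarity: bool = True) -> torch.Tensor:
    x_adv = x.clone().detach().requires_grad_(True)
    mls = model(x_adv).max(dim=1).values.sum()     # maximum logit score
    mls.backward()
    sign = 1.0 if false_familiarity else -1.0
    with torch.no_grad():
        x_adv = x_adv + sign * epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid image range
    return x_adv.detach()

# Toy usage with a placeholder "classifier" on 64x64x3 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 200))
x = torch.rand(4, 3, 64, 64)
x_ff = attack_mls(model, x, epsilon=8 / 255, false_familiarity=True)
x_fn = attack_mls(model, x, epsilon=8 / 255, false_familiarity=False)
print((model(x_ff).max(1).values - model(x).max(1).values).mean())
```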
14ptPJP6fG
Familiarity-Based Open-Set Recognition Under Adversarial Attacks
[ "Philip Enevoldsen", "Christian Gundersen", "Nico Lang", "Serge Belongie", "Christian Igel" ]
Open-set recognition (OSR), the identification of novel categories, can be a critical component when deploying classification models in real-world applications. Recent work has shown that familiarity-based scoring rules such as the Maximum Softmax Probability (MSP) or the Maximum Logit Score (MLS) are strong baselines when the closed-set accuracy is high. However, one of the potential weaknesses of familiarity-based OSR are adversarial attacks. Here, we study gradient-based adversarial attacks on familiarity scores for both types of attacks, False Familiarity and False Novelty attacks, and evaluate their effectiveness in informed and uninformed settings on TinyImageNet. Furthermore, we explore how novel and familiar samples react to adversarial attacks and formulate the adversarial reaction score as an alternative OSR scoring rule, which shows a high correlation with the MLS familiarity score.
[ "Open-set recognition", "Adversarial attacks" ]
https://openreview.net/pdf?id=14ptPJP6fG
https://openreview.net/forum?id=14ptPJP6fG
vEfjdpVQCC
decision
1,730,901,555,246
14ptPJP6fG
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We recommend an oral and a poster presentation given the AC's and reviewers' recommendations.
14ptPJP6fG
Familiarity-Based Open-Set Recognition Under Adversarial Attacks
[ "Philip Enevoldsen", "Christian Gundersen", "Nico Lang", "Serge Belongie", "Christian Igel" ]
Open-set recognition (OSR), the identification of novel categories, can be a critical component when deploying classification models in real-world applications. Recent work has shown that familiarity-based scoring rules such as the Maximum Softmax Probability (MSP) or the Maximum Logit Score (MLS) are strong baselines when the closed-set accuracy is high. However, one of the potential weaknesses of familiarity-based OSR are adversarial attacks. Here, we study gradient-based adversarial attacks on familiarity scores for both types of attacks, False Familiarity and False Novelty attacks, and evaluate their effectiveness in informed and uninformed settings on TinyImageNet. Furthermore, we explore how novel and familiar samples react to adversarial attacks and formulate the adversarial reaction score as an alternative OSR scoring rule, which shows a high correlation with the MLS familiarity score.
[ "Open-set recognition", "Adversarial attacks" ]
https://openreview.net/pdf?id=14ptPJP6fG
https://openreview.net/forum?id=14ptPJP6fG
kr9JzPkTgh
official_review
1,726,693,284,935
14ptPJP6fG
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission19/Reviewer_6NYm" ]
NLDL.org/2025/Conference
2025
title: Overall the submitted article is well-written and well explained summary: The paper mainly focuses on exploring how to evaluate the effect of adversarial attacks on familiarity-based OSR. It discusses the effects of different metrics based on the type of attack. The article is well written and elaborates on the proposed approach to adversarial attacks. It discusses the impact on performance based on the metric selected for optimization. The authors also propose a new scoring method, but it does not seem to improve upon existing methods. strengths: The authors have done a good job of elaborating on and explaining the different types of attacks as well as the different objective functions being used. They have also discussed the methodology in detail. weaknesses: From the reviewer's perspective, more details could be added for the experiments and the explanation of the figures, as this would improve the readability of the paper. It is critical to include future directions for improvement and how the authors see their work contributing to research in the longer term, since the metrics proposed by the authors do not necessarily improve on existing ones. confidence: 4 justification: Overall, it is a well-written paper that elaborates on the necessary technical details of the proposed approach. The paper could be improved by adding more details to the experimental results section and highlighting future directions for the work.
14ptPJP6fG
Familiarity-Based Open-Set Recognition Under Adversarial Attacks
[ "Philip Enevoldsen", "Christian Gundersen", "Nico Lang", "Serge Belongie", "Christian Igel" ]
Open-set recognition (OSR), the identification of novel categories, can be a critical component when deploying classification models in real-world applications. Recent work has shown that familiarity-based scoring rules such as the Maximum Softmax Probability (MSP) or the Maximum Logit Score (MLS) are strong baselines when the closed-set accuracy is high. However, one of the potential weaknesses of familiarity-based OSR are adversarial attacks. Here, we study gradient-based adversarial attacks on familiarity scores for both types of attacks, False Familiarity and False Novelty attacks, and evaluate their effectiveness in informed and uninformed settings on TinyImageNet. Furthermore, we explore how novel and familiar samples react to adversarial attacks and formulate the adversarial reaction score as an alternative OSR scoring rule, which shows a high correlation with the MLS familiarity score.
[ "Open-set recognition", "Adversarial attacks" ]
https://openreview.net/pdf?id=14ptPJP6fG
https://openreview.net/forum?id=14ptPJP6fG
a6YDvMAcMz
official_review
1,727,966,991,834
14ptPJP6fG
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission19/Reviewer_dRG9" ]
NLDL.org/2025/Conference
2025
title: First Review summary: The authors investigated the concept of adversarial Out-of-Distribution (OOD) examples [1]. In more detail, the paper focuses on two OOD detection methods (Maximum Softmax Probability (MSP) and Maximum Logit Score (MLS)) and gradient-based adversarial attacks (FGSM and BIM). The aim is to generate adversarial In-Distribution examples (false negatives), wrongly accepted by an OOD detector, and adversarial OOD examples (false positives), with the reverse effect. The authors proposed different definitions of attacks towards either ID or OOD samples, and also validated which strategy is best when an attacker does not know if a sample is ID or OOD. Lastly, the adversarial OOD attack was suggested as a strategy to improve OOD detection; however, it did not improve over the baseline (MLS) on the benchmark. [1] "Adversarial OOD examples are constructed w.r.t the OOD detector, which is different from the standard notion of adversarial examples (constructed w.r.t the classification model).", from Chen, Jiefeng, et al. "Atom: Robustifying out-of-distribution detection using outlier mining." Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, September 13–17, 2021, Proceedings, Part III 21. Springer International Publishing, 2021. strengths: The paper is clear, and the experimental setup is based on established OOD benchmarks [1], which enhances reproducibility. In my view, the paper's most significant finding is that when an attacker is unaware of the OOD detector's output, targeting false positives results in a more severe drop in performance compared to focusing on false negatives. [1] S. Vaze, K. Han, A. Vedaldi, and A. Zisserman. "Open-Set Recognition: A Good Closed-Set Classifier is All You Need". In: International Conference on Learning Representations (ICLR). 2022. weaknesses: - The main weakness is that the concept of adversarial ID/OOD examples is not novel [1,2,5]; therefore the authors should reference and address existing baselines. In [1,2], adversarial OOD/ID examples have been defined for both softmax- and distance-based OOD detectors, with stronger white- and black-box attacks, also introducing approaches to make detection more reliable. - The authors proposed a white-box attack, so it seems to me an unrealistic constraint to suppose that the attacker has access to the model weights but not to the ID/OOD label (which the attacker can compute given the OOD detector's output). - The ARS does not improve over the baseline, even in a scenario where (correct me if I am wrong) the ARS hyperparameters have been selected on the test set (instead of a validation set). In this regard, a previous similar approach, which considers the difference between normal and perturbed logits for detection, could help the authors improve their work [3]. - The experimental setup is quite limited (one model, one ID dataset). - The literature review on generalized OOD detection should be updated with more recent benchmarks [4]. [1] Chen, Jiefeng, et al. "Atom: Robustifying out-of-distribution detection using outlier mining." Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, September 13–17, 2021, Proceedings, Part III 21. Springer International Publishing, 2021. [2] Chen, Jiefeng, et al. "Robust out-of-distribution detection for neural networks." arXiv preprint arXiv:2003.09711 (2020). [3] Roth, Kevin, Yannic Kilcher, and Thomas Hofmann. "The odds are odd: A statistical test for detecting adversarial examples." International Conference on Machine Learning. PMLR, 2019. [4] Zhang, Jingyang, et al. "Openood v1. 5: Enhanced benchmark for out-of-distribution detection." arXiv preprint arXiv:2306.09301 (2023). [5] Azizmalayeri, Mohammad, et al. "Your out-of-distribution detection method is not robust!." Advances in Neural Information Processing Systems 35 (2022): 4887-4901. confidence: 4 justification: The main concern with the paper is that the authors did not address relevant literature on adversarial ID/OOD attacks. Additionally, the results remain preliminary, as they are restricted to softmax OOD detectors and gradient-based attacks, and do not offer substantial contributions, such as improved robustness in OOD detection. final_rebuttal_confidence: 4 final_rebuttal_justification: As I mentioned in my original review, I believe the paper has a clear presentation. Initially, I had concerns regarding existing frameworks for ID/OOD adversarial examples. However, I reconsidered this given that the authors specifically focused on Open Set Recognition (OSR), whereas prior work concentrated mostly on OOD. My second concern was about the significance of the results, which the authors have improved (or will improve in the camera-ready) by better explaining the experiments and by including additional analysis (Novel vs Familiar MLS, multiple $\epsilon$-values for the informed attack). My final opinion is inclined toward accepting the work.
14ptPJP6fG
Familiarity-Based Open-Set Recognition Under Adversarial Attacks
[ "Philip Enevoldsen", "Christian Gundersen", "Nico Lang", "Serge Belongie", "Christian Igel" ]
Open-set recognition (OSR), the identification of novel categories, can be a critical component when deploying classification models in real-world applications. Recent work has shown that familiarity-based scoring rules such as the Maximum Softmax Probability (MSP) or the Maximum Logit Score (MLS) are strong baselines when the closed-set accuracy is high. However, one of the potential weaknesses of familiarity-based OSR are adversarial attacks. Here, we study gradient-based adversarial attacks on familiarity scores for both types of attacks, False Familiarity and False Novelty attacks, and evaluate their effectiveness in informed and uninformed settings on TinyImageNet. Furthermore, we explore how novel and familiar samples react to adversarial attacks and formulate the adversarial reaction score as an alternative OSR scoring rule, which shows a high correlation with the MLS familiarity score.
[ "Open-set recognition", "Adversarial attacks" ]
https://openreview.net/pdf?id=14ptPJP6fG
https://openreview.net/forum?id=14ptPJP6fG
ZC7TEXIsuO
official_review
1,728,174,590,069
14ptPJP6fG
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission19/Reviewer_EGqu" ]
NLDL.org/2025/Conference
2025
title: Interesting experimental analysis into (a) best practices for and (b) comparisons between adversarial attacks for open set recognition. summary: This paper performs an analysis into adversarial attacks for open set recognition (OSR)†, that is, methods that can trick ML classification systems into either believing an in-distribution class is actually out-of-distribution (called a false novelty or FP attack) or vice versa (called a false familiarity or FN attack). This paper first introduces these concepts (section 1) and attack methods (mainly variants of existing adversarial attack methods; section 2), before then performing experiments (section 3) assessing the following aspects of OSR attacks: 1. Whether it's easier to perform an FP attack or FN attack (Figures 2–3), which is perhaps linked to how the OSR approach actually works (i.e., whether it looks for the presence of "unusual features" or the absence of "usual features"). _results: FN attacks are easier (when the attacker knows whether the image belongs to closed or open set)._ 2. How iterative attacks compare to single-step attacks (e.g., FGSM [10]) in effectiveness (Figure 3). _results: iterative attacks are found to be more effective and almost able to perfectly attack the classifier under consideration._ 3. How uninformed attacks compare against informed attacks (an uninformed attacker does not know whether the input image is currently in or out of distribution, or in other words part of the closed set). _results: an attacker can do much better in an informed attack (as expected), but perhaps more surprisingly this affects FN attacks more than FP attacks._ 4. Can downstream effects of adversarial attacks be used as an OSR method? (Section 2.6 and Figure 4). _results: yes, but not sufficiently better than the max logit approach, which is a much simpler method._ † The OSR is done here by looking at the magnitudes of the logits of a classifier. However, the paper also investigates whether one can use downstream effects of adversarial attacks as an OSR method itself (Section 2.6 of paper and point 4 above). strengths: I have broken down my view of the paper's strengths along the suggested axes of correctness, quality, clarity, and significance below. ### Correctness The experimental setup and inferences made from the results seem reasonable, as well as the baseline model/weights chosen (based on [3]). One small caveat is that only the TinyImageNet task is considered (rather than considering all of those in [3]). ### Quality Overall the experiments seem interesting, although there are a couple of aspects which I think the paper could improve, which I have listed in the weaknesses section later. Things that I think the paper did well: * The paper's analysis may be helpful for understanding how OSR techniques work and aid their future development. To explain further: it is currently unclear how much OSR techniques make use of (a) "usual features" being missing versus (b) "unusual features" being present, when deciding whether an image is not part of the closed set. The relative ease of FN attacks in the informed setting suggests that (a) is the case, corroborating recent work [9]. * I also appreciated the fact that the authors properly evaluated the ARS approach for OSR, showing that actually it was not clearly more effective than existing approaches. ### Clarity Overall, I thought the paper was clearly written and well presented. 
Figure 1 was very helpful in laying out the problem and the methods were well defined in Section 2. I only have some more minor feedback in terms of presentation: * I think it would have been better to have devoted more space to informed attacks (as more important) and less to uninformed (see W1 below). * I was not sure what the solid lines were in Figure 4b. * I found the notation around losses/objectives (e.g., equations 5–7) a little confusing. Sometimes these are losses that are minimized and sometimes these are objectives that are maximized. This could have been made more consistent (e.g., by sticking to losses only and introducing negative signs as appropriate). * Perhaps a little more explanation in Section 2.6 would have been helpful (I did not realize at first that this was introducing an approach for OSR rather than a way to measure attack effectiveness). ### Significance I thought the paper contained several results interesting to other members of the community: * The paper's experiments (see quality section above) help explain how OSR attacks might work and best practices for currently carrying them out (which is needed to develop better defenses). * The strong success of iterative informed attacks (lines 298–321) may lead to more future research into preventing this. * The fact that ARS did not work that well means that, although it may not be able to replace existing OSR techniques, it might spur more research into these ideas. weaknesses: ### W1 More focus currently on uninformed attacks versus informed attacks More focus seems to be on the uninformed attacks (figure 2) rather than the informed attacks (figure 3). This means that there is less relative space to analyze the informed attacks, which I think are the more interesting of the two. For instance, the difference between using different objective functions for the informed attacks is not shown in the paper's figures, which would have been nice to include. ### W2 Missing analysis into how attack's objective interacts with OSR method Currently the different attacks are evaluated against only one OSR method (MLS). It is unclear whether the presented findings would apply to other methods (e.g., the mentioned MSP or even techniques using separate classifiers such as [7]), which somewhat reduces the potential significance of the paper. ### Misc. Other Questions Q1: Line 309 says "max loss" is better for FN attacks; however, from Figure 2a I would have thought Log-MSP is actually slightly better? Q2: For FN attacks, does the L2 Norm loss not have a similar limitation to when it is used in the FP setting, which is explained on line 198 (i.e., that you can make a logit negative to increase this score without this affecting the MLS score and subsequent classification as not part of the closed set)? Q3: I was a little confused that the MLS went back down in Figure 2c or back up in Figure 2d as $\epsilon$ increased, even though the AUROC continued to go down (i.e., the attack worked better from the attacker's perspective). Is there any intuition for this result? Is the same effect seen with the iterative attacks? Q4: Figure 2c & d only show the median of the max logit score. What do the distributions of these scores look like? Presumably they are somewhat bimodal (due to the images from both the familiar and novel classes being included)? 
### Minor typos (does not affect my score/review) * line 104: "evaluate" -> "evaluated" * Figure 2 captions on subplots: "advesarial" -> "adversarial" * line 267: "uniformed" -> "uninformed" confidence: 3 justification: I think this paper provides an interesting analysis into adversarial attacks for OSR, comparing and finding best practices for choosing objectives and performing attacks. FN attacks are found to be easier in the informed setting, corroborating recent suggestions [9] that it may be easier to increase logit scores (and "usual features") than depress them. I think this information would be interesting for others and so have gone with a higher score, urging acceptance, although a lower confidence score due to my familiarity with some of the related work. final_rebuttal_confidence: 3 final_rebuttal_justification: Having read the rebuttal and the other reviews, I still hold similar views to my original review. I thought the analysis into adversarial attacks for OSR was interesting, although limited in the OSR methods assessed (the authors also acknowledge this). I thought the authors did a good job with the rebuttal and answered several of my questions, also promising to provide more information on the more interesting informed attacks. It was good to see one of the other reviewer's concerns seemingly resolved as a typo. I've gone with a lower confidence score due to my familiarity with some of the related work.
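Since the review above discusses the paper's loss/objective notation (equations 5–7) and the two attack directions, here is a minimal sketch, under stated assumptions, of the basic gradient step involved: a single FGSM-style update that pushes the Maximum Logit Score up (false familiarity) or down (false novelty). The `model`, `epsilon`, and the [0, 1] input range are assumptions, and the paper's actual objectives and its iterative attacks are not reproduced here.

```python
import torch

def fgsm_on_mls(model, x, epsilon, mode="false_familiarity"):
    """One FGSM-style step on the Maximum Logit Score (MLS).

    mode="false_familiarity": push MLS up so a novel input looks familiar.
    mode="false_novelty":     push MLS down so a familiar input looks novel.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    mls = model(x_adv).max(dim=-1).values.sum()      # summed over the batch
    grad = torch.autograd.grad(mls, x_adv)[0]
    sign = 1.0 if mode == "false_familiarity" else -1.0
    x_adv = x_adv + sign * epsilon * grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()            # assumes image inputs in [0, 1]
```

Writing both directions around a single maximised quantity (the MLS) also illustrates the reviewer's point that a consistent loss-versus-objective convention avoids sign confusion.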
14ptPJP6fG
Familiarity-Based Open-Set Recognition Under Adversarial Attacks
[ "Philip Enevoldsen", "Christian Gundersen", "Nico Lang", "Serge Belongie", "Christian Igel" ]
Open-set recognition (OSR), the identification of novel categories, can be a critical component when deploying classification models in real-world applications. Recent work has shown that familiarity-based scoring rules such as the Maximum Softmax Probability (MSP) or the Maximum Logit Score (MLS) are strong baselines when the closed-set accuracy is high. However, one of the potential weaknesses of familiarity-based OSR are adversarial attacks. Here, we study gradient-based adversarial attacks on familiarity scores for both types of attacks, False Familiarity and False Novelty attacks, and evaluate their effectiveness in informed and uninformed settings on TinyImageNet. Furthermore, we explore how novel and familiar samples react to adversarial attacks and formulate the adversarial reaction score as an alternative OSR scoring rule, which shows a high correlation with the MLS familiarity score.
[ "Open-set recognition", "Adversarial attacks" ]
https://openreview.net/pdf?id=14ptPJP6fG
https://openreview.net/forum?id=14ptPJP6fG
MFwIDG1tmn
official_review
1,726,845,869,233
14ptPJP6fG
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission19/Reviewer_tBQx" ]
NLDL.org/2025/Conference
2025
title: Well-motivated research, but there are concerns about the main claims summary: The open-set recognition (OSR) task is a special type of out-of-distribution (OOD) task. In OSR, we need to distinguish samples from novel classes (unseen during training) and familiar (known) classes. Recent works have demonstrated that familiarity-based scores can achieve competitive performance in OSR tasks. Familiarity-based scores use the maximum estimated probability of (known) classes (Maximum SoftMax Probability, MSP) or the maximum logit output of (known) classes (Maximum Logit Score, MLS). The hypothesis of Dietterich & Guyer can be summarized as "the logit score on only one class can be easily increased with an adversarial perturbation while it might be more difficult to decrease the logit score for all (known) classes." From this, the paper first speculates that it will be easier to cause false familiarity than to cause false novelty. This research investigated adversarial attacks on MLS. In uninformed attacks, where the attacker does not know if a sample is known or novel, false novelty (and false familiarity) attacks are applied to both known and novel samples to destroy the familiarity ranking of samples. In informed attacks, where the attacker knows if a sample is known or novel, a false familiarity (FN) attack is applied to novel samples to increase MLS values. A false novelty (FP) attack is applied to known samples to reduce MLS values. The analysis found that in uninformed settings, as shown in Fig. 2(a) and 2(b), the FP attack is more effective in destroying the rankings than the FN attack. From Fig. 2(c) and 2(d), the paper explained this result with the hypothesis of Dietterich & Guyer. The FN attack might effectively increase the MLS of both known and novel samples, which preserves the familiarity ranking of samples. On the other hand, an FP attack might be more effective in decreasing the MLS of known samples, which destroys the familiarity ranking of samples. In informed settings, the research found that the FN attack is more effective in destroying the rankings (shown in Fig. 3). The paper concluded that the results confirm the hypothesis of Dietterich & Guyer. strengths: Due to the real-world application of machine learning models, where it is common to encounter samples from classes unseen during training, OSR is a significant problem. This paper investigated adversarial attacks on OSR. This research started from a good intuition motivated by the prediction of Dietterich & Guyer. The work tried to investigate several problems regarding adversarial attacks on OSR: the relative effectiveness of False Familiarity and False Novelty attacks, one-step versus iterative attacks, benefits of informed attacks, and an adversarial attack-based OSR score. The investigated losses for adversarial attacks seem reasonable, as does the proposed Adversarial Reaction Score (ARS) for OSR. The overall structure of the paper was good. weaknesses: One of the major claims in the paper is that it confirmed the prediction of Dietterich & Guyer: it will be easier to increase the familiarity score than to decrease it (as the latter requires changing the logit/SoftMax of "all" known classes). In lines 268 to 286, the paper derived this from the uninformed attack situation. However, as uninformed attacks are applied on both novel and familiar samples, the majority of MLS change may occur in either novel or familiar samples (not both). 
While it is counterintuitive, it is still possible that when epsilon is small (around 0.2), most of the MLS increase in FN attacks happened only in familiar samples. In that case, the speculation in lines 271-274 makes less sense. (This speculation is likely correct, but the shown results do not "confirm" the hypothesis of Dietterich & Guyer.) Moreover, when epsilon was large (at least 0.8) in the FN attack, Fig. 2(a) and 2(c) show a puzzling phenomenon where MLS was almost recovered while AUROC was low. This phenomenon seems to contradict the claim of ease in increasing the logit score. Because of these issues, we need MLS change plots for novel samples and the corresponding plots, separately, for familiar samples to properly support the claim. (Specificity and sensitivity plots for various epsilons can also be helpful. Less important figures like Fig. 2(e), 2(f), and Fig. 3 can be moved to the appendix instead.) For the informed attack situation, the paper used only one epsilon value for each setting. Moreover, larger epsilon values were used for the FN attack than for the FP attack. Hence, it is hard to judge if the FN attack is more effective than the FP attack in informed attack settings. To compare the effectiveness of the two attacks, it is better to show the results for at least two epsilon values, with the same epsilon values for FP and FN attacks. While the paper intuitively explained (Fig. 1 and lines 61-65) that the False Novelty attack might be harder as it requires reducing the logits (or SoftMax prediction) for "all (known) classes" (unlike the False Familiarity attack, where changing "one class" is enough), there is no experiment on the effect of the number of known (familiar) classes. While it is not necessary, identifying the relationship between the number of known (familiar) classes and the relative efficiency of FP/FN attacks in the informed setting can greatly strengthen the paper. It can be confusing for the readers that MLS outperforms MSP in OSR performance (in lines 125 to 128) given their equivalence except for the magnitude information in the logits (as explained by Vaze et al.). It would be nicer if the paper briefly explained how MLS could outperform MSP. It would be nice if the goal of the attacks (destroying the ranks of samples) were explained when uninformed attacks were first introduced (lines 210-212). While the paper used a non-traditional approach (RPROP) for iterative adversarial attacks, the paper did not provide or mention any plan for releasing their code. Attack hyperparameters, such as the number of steps, are also missing. I would recommend releasing the code for reproducibility. *Minor comments (these do not affect the rating) While the title and abstract (lines 11-12) seem to indicate that the paper analyzed both familiarity-based Open-Set Recognition (OSR) approaches (MSP+MLS), the paper only experimented with one (MLS) of them. In line 139, there are two periods (typo). The terms "False Familiarity (FN)" and "False Novelty (FP)" attacks are confusing, especially in uninformed settings, as these are applied to both novel and familiar (known) samples. For instance, applying a "False Novelty (FP)" attack on "a novel sample" might result in "a perturbed novel sample" (it is a novel sample, but not a "False novel" sample). Perhaps terms like "Familiarity (F)" or "Novelty (N)" attacks might be more appropriate in uninformed settings. Personally, using "F" and "N" for abbreviations is less confusing than using "N" and "P". 
The real class $y$ is actually not used in OSR attacks, unlike in closed-set attacks. Hence, $y$ can be removed from the OSR attack losses. For instance, $L_{max}(\theta,x,y)$ can be written as $L_{max}(\theta,x)$. The x-axis of Fig. 2(a), 2(b), 2(c), and 2(d) seems to represent the "relative size of epsilon" rather than the "(actual) size of epsilon". In lines 311 to 315, it seems the author(s) mistakenly swapped "FP" and "FN" attacks. In equation (9), the shown formulation can use a different class for $f(x^{adv})$ and $f(x)$. Is this intended or a mistake (perhaps intended for $f(x^{adv})\_{y^*}-f(x)\_{y^*}$ where $y^*$ is $\text{argmax}\_{y} f(x)$)? Which formulation was used in the implementation? The changes in MLS are not monotonic in Fig. 2(c) and 2(d). Can the author(s) explain possible reasons for this? Perhaps due to the linear attack (FGSM)? confidence: 4 justification: Adversarial attacks on OSR are problems that are worth investigating. Moreover, investigating the Familiarity Hypothesis is also important research regarding OSR. The research tried to answer several questions about adversarial attacks on OSR. However, I have concerns regarding the correctness of the derivation of the main claims of the paper, including the confirmation of Dietterich & Guyer’s prediction that the (maximum) logit score can be more easily increased with an adversarial perturbation than decreased. These concerns might be easily resolved by providing detailed results (a more separated investigation of known and novel samples). Hence, I am leaning toward the rejection of the paper at this stage, but this can change if the issues are properly addressed later. final_rebuttal_confidence: 4 final_rebuttal_justification: Adversarial attacks on OSR are problems that are worth investigating. Moreover, investigating the Familiarity Hypothesis is also important research regarding OSR. The research tried to answer several questions about adversarial attacks on OSR. Edit: Previously, I had concerns regarding the correctness of the claims (the hypothesis of Dietterich & Guyer: the (maximum) logit score can be more easily increased with an adversarial perturbation than decreased; the fairness of comparing FP and FN efficiency in the informed setting). The author(s) addressed these concerns through updates and responses. I am now leaning toward the acceptance of the work.
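To make the ranking-based evaluation discussed in this review concrete (AUROC over MLS for familiar versus novel samples, and how it reacts when an attack shifts one group's scores), here is a small sketch with synthetic MLS values; none of these numbers come from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def osr_auroc(mls_familiar: np.ndarray, mls_novel: np.ndarray) -> float:
    """AUROC of MLS used as a familiarity score: familiar samples should rank higher."""
    scores = np.concatenate([mls_familiar, mls_novel])
    labels = np.concatenate([np.ones_like(mls_familiar), np.zeros_like(mls_novel)])
    return roc_auc_score(labels, scores)

rng = np.random.default_rng(0)
mls_familiar = rng.normal(10.0, 2.0, size=1000)   # hypothetical clean scores
mls_novel = rng.normal(6.0, 2.0, size=1000)

print("clean AUROC:", osr_auroc(mls_familiar, mls_novel))
# A false-familiarity attack that lifts only the novel scores destroys the ranking
# even though an aggregate MLS statistic over the whole pool goes up, showing how
# MLS summaries and AUROC can move in opposite directions.
print("attacked AUROC:", osr_auroc(mls_familiar, mls_novel + 6.0))
```

Reporting the two groups separately, as the reviewer requests, is exactly what such a per-group breakdown makes visible.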
0io7gvXniL
World Model Agents with Change-Based Intrinsic Motivation
[ "Jeremias Lino Ferrao", "Rafael F. Cunha" ]
Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results highlight the possibility of CBET improving DreamerV3's returns in Crafter but the algorithm attains a suboptimal policy in Minigrid with CBET further reducing returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET provides a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
[ "reinforcement learning", "intrinsic rewards", "transfer learning", "sparse reward environments" ]
https://openreview.net/pdf?id=0io7gvXniL
https://openreview.net/forum?id=0io7gvXniL
xe9FqTE0eC
official_review
1,729,020,383,959
0io7gvXniL
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission38/Reviewer_gULj" ]
NLDL.org/2025/Conference
2025
title: World Model Agents with Change-Based Intrinsic Motivation summary: The authors compare models for Reinforcement Learning and evaluate these models in two environments. strengths: The authors compare several models for Reinforcement Learning and evaluate these models in two environments. weaknesses: There is a lack of novelty; this work does not present relevant models or applications in the real world. confidence: 3 justification: There is a lack of novelty; this work does not present relevant models or applications in the real world.
0io7gvXniL
World Model Agents with Change-Based Intrinsic Motivation
[ "Jeremias Lino Ferrao", "Rafael F. Cunha" ]
Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results highlight the possibility of CBET improving DreamerV3's returns in Crafter but the algorithm attains a suboptimal policy in Minigrid with CBET further reducing returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET provides a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
[ "reinforcement learning", "intrinsic rewards", "transfer learning", "sparse reward environments" ]
https://openreview.net/pdf?id=0io7gvXniL
https://openreview.net/forum?id=0io7gvXniL
vKg43YIaIz
official_review
1,728,923,102,898
0io7gvXniL
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission38/Reviewer_BRjG" ]
NLDL.org/2025/Conference
2025
title: Good comparative study of DreamerV3 and IMPALA agents, with and without CBET, in both tabula rasa and transfer learning settings summary: The paper explores the difficult problem of sparse reward environments in reinforcement learning (RL). It adapts the CBET (Change-Based Exploration Transfer) framework to agents like DreamerV3, comparing it with the IMPALA algorithm in sparse reward environments (specifically Crafter and Minigrid). The goal is to assess both tabula rasa (learning from scratch) and transfer learning settings with intrinsic motivation to address exploration in RL. strengths: 1) Provides an adaptation of the CBET framework to accommodate World Model agents like DreamerV3 during transfer learning. 2) DreamerV3 underperformed compared to the results reported in the Crafter experiments in the original DreamerV3 paper - good explanation and analysis w.r.t. the planning ratio (the gap between returns decreases as the planning ratio increases). 3) The paper showcases the importance of carefully selecting exploration strategies and transfer learning mechanisms in sparse reward settings. Also, the authors propose a future direction which could be really interesting: developing an intrinsic reward coefficient scheduler. weaknesses: 1) CBET should be cited, and not abbreviated before it is introduced, in the abstract (minor comment). 2) The authors say the result on transfer in DreamerV3 might suggest that the policy transfer method proposed in Equation 3 is not particularly effective. It is not clear whether this is true or can be fully guaranteed. 3) It would have been good if the authors had chosen another simple environment and obtained similar results - e.g., IMPALA doing well tabula rasa in Minigrid - to validate the hypothesis that DreamerV3 overfits. 4) The authors say the intrinsic rewards provided by CBET may not always align with the agent’s learning objectives, which is true, but even the intrinsic reward coefficients are very low - close to 0 (as in the table in the appendix) - so how would it look if set to zero? It would have been interesting to see that result. confidence: 4 justification: Good analysis and study of CBET for the IMPALA and DreamerV3 methods in sparse reward settings.
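As an illustrative aside to point 4 above (near-zero intrinsic reward coefficients) and to the suggested coefficient scheduler, here is a minimal sketch of how the two ideas fit together; the function names, the linear decay, and the default values are assumptions for illustration and are not taken from the paper.

```python
def intrinsic_coefficient(step: int, beta0: float = 0.01, decay_steps: int = 500_000) -> float:
    """A simple linearly decaying schedule for the intrinsic reward coefficient.

    This only sketches the 'coefficient scheduler' future direction mentioned in
    the review; the paper itself reportedly uses small fixed coefficients.
    """
    return beta0 * max(0.0, 1.0 - step / decay_steps)

def shaped_reward(r_ext: float, r_int: float, step: int) -> float:
    beta = intrinsic_coefficient(step)
    # Setting beta to exactly 0 recovers the pure extrinsic objective,
    # which is the ablation the reviewer asks about.
    return r_ext + beta * r_int
```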
0io7gvXniL
World Model Agents with Change-Based Intrinsic Motivation
[ "Jeremias Lino Ferrao", "Rafael F. Cunha" ]
Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results highlight the possibility of CBET improving DreamerV3's returns in Crafter but the algorithm attains a suboptimal policy in Minigrid with CBET further reducing returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET provides a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
[ "reinforcement learning", "intrinsic rewards", "transfer learning", "sparse reward environments" ]
https://openreview.net/pdf?id=0io7gvXniL
https://openreview.net/forum?id=0io7gvXniL
mEIXTwe02O
official_review
1,726,385,908,789
0io7gvXniL
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission38/Reviewer_AN49" ]
NLDL.org/2025/Conference
2025
title: While the experimental results could benefit the community, the authors focus solely on testing existing methods rather than introducing new ones. summary: This paper studies an application of CBET (Change Based Exploration Transfer) to the DreamerV3 model. It compares it with an IMPALA model in the Crafter and Minigrid environments, where multi-task settings are available. The paper shows that DreamerV3 with CBET generally performs better than IMPALA, but the latter shows better performance in simpler settings. strengths: 1. The paper utilized the CBET method on Dreamer. The experimental results indicate that Dreamer outperforms IMPALA in complex settings but not in simpler ones. These findings are likely to be beneficial to the RL community. weaknesses: 1. The paper lacks novelty in the sense that it is a simple report of experimental results with already well-known methods (CBET+Dreamer). 2. The paper lacks many implementation details. For example, I would recommend that the authors include pseudo-code of the algorithms. 3. The paper lacks comparison with other related works. I think the authors could have included a wider variety of comparisons, not only IMPALA. For example, what other exploration strategies can we choose other than CBET? 4. The motivation to study DreamerV3 is not clear. 5. Some terminology is unclear: the authors suggest that IMPALA performs better on Minigrid tasks because Dreamer may have overfitted. However, it is unclear what the authors mean by overfitting in the context of RL, especially when compared to a supervised learning setting. confidence: 3 justification: Overall, the paper does not introduce any new methods but instead reports experimental results of existing ones. Therefore, I am leaning towards rejection for now. final_rebuttal_confidence: 3 final_rebuttal_justification: The authors' rebuttal resolved the concerns, including the contribution of the work and the overall clarity of the work. Therefore, I would be pleased to see the paper accepted.
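In the spirit of the pseudo-code the reviewer asks for in point 2, here is a loose Python sketch of a change-based count bonus; it is emphatically not the authors' implementation (the exact CBET formulation, its learned representations, and the world-model adaptation are simplified or omitted), just an illustration of the kind of intrinsic reward being discussed.

```python
import numpy as np
from collections import defaultdict

class ChangeBasedBonus:
    """Toy change-based exploration bonus: rarely seen observation changes pay more."""

    def __init__(self):
        self.change_counts: dict[bytes, int] = defaultdict(int)

    def intrinsic_reward(self, obs: np.ndarray, next_obs: np.ndarray) -> float:
        # Crude stand-in for a change representation: hash the raw observation difference.
        change_key = (next_obs.astype(np.int16) - obs.astype(np.int16)).tobytes()
        self.change_counts[change_key] += 1
        return 1.0 / np.sqrt(self.change_counts[change_key])
```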
0io7gvXniL
World Model Agents with Change-Based Intrinsic Motivation
[ "Jeremias Lino Ferrao", "Rafael F. Cunha" ]
Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results highlight the possibility of CBET improving DreamerV3's returns in Crafter but the algorithm attains a suboptimal policy in Minigrid with CBET further reducing returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET provides a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
[ "reinforcement learning", "intrinsic rewards", "transfer learning", "sparse reward environments" ]
https://openreview.net/pdf?id=0io7gvXniL
https://openreview.net/forum?id=0io7gvXniL
GHQqeLNopP
meta_review
1,730,310,454,780
0io7gvXniL
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission38/Area_Chair_wXp8" ]
NLDL.org/2025/Conference
2025
metareview: The paper addresses the fundamental RL problem of sparse rewards. It proposes to extend the use of CBET from model-free RL to model-based RL and compares the use of CBET with DreamerV3 and IMPALA in two environments (Minigrid and Crafter). The paper is well presented and has been extended based on reviewer comments (e.g. pseudo-code). pro: 1. important problem for RL 2. tested in two environments cons: 1. limited comparison 2. new, but somewhat incremental contribution recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 3: The area chair is somewhat confident
0io7gvXniL
World Model Agents with Change-Based Intrinsic Motivation
[ "Jeremias Lino Ferrao", "Rafael F. Cunha" ]
Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results highlight the possibility of CBET improving DreamerV3's returns in Crafter but the algorithm attains a suboptimal policy in Minigrid with CBET further reducing returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET provides a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
[ "reinforcement learning", "intrinsic rewards", "transfer learning", "sparse reward environments" ]
https://openreview.net/pdf?id=0io7gvXniL
https://openreview.net/forum?id=0io7gvXniL
5xHIn9cs0Z
decision
1,730,901,556,225
0io7gvXniL
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We have decided to offer opportunities for oral presentations in the remaining available slots in the NLDL program. Thus, despite the AC's poster recommendation, we recommend an oral presentation in addition to the poster presentation given the AC's and reviewers' recommendations.
0io7gvXniL
World Model Agents with Change-Based Intrinsic Motivation
[ "Jeremias Lino Ferrao", "Rafael F. Cunha" ]
Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results highlight the possibility of CBET improving DreamerV3's returns in Crafter but the algorithm attains a suboptimal policy in Minigrid with CBET further reducing returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET provides a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
[ "reinforcement learning", "intrinsic rewards", "transfer learning", "sparse reward environments" ]
https://openreview.net/pdf?id=0io7gvXniL
https://openreview.net/forum?id=0io7gvXniL
5eCkxgDcFS
official_review
1,728,395,415,251
0io7gvXniL
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission38/Reviewer_Fhut" ]
NLDL.org/2025/Conference
2025
title: Review summary: This paper considers the challenge of learning effective policies in sparse-reward environments in reinforcement learning. They note that both intrinsic rewards (i.e. providing reward to encourage the visitation of interesting states) and transfer learning (i.e. applying knowledge distilled on other tasks to the current one) have been demonstrated to help with this challenge. An approach called Change Based Exploration Transfer (CBET) combines both of these ideas. They note that CBET has only been applied within the model-free algorithm IMPALA but has not been extended towards state-of-the-art model-based algorithms like DreamerV3. Their central idea is that this integration may help improve performance. They integrate CBET with DreamerV3 and evaluate its performance on Minigrid and Crafter. They draw comparisons to standard DreamerV3 and also compare to IMPALA and IMPALA with CBET. Their experimentation is inconclusive and does not demonstrate a substantial benefit of including CBET in DreamerV3. strengths: - The paper is well-written. This made the paper easy to read and parse. It introduced the necessary background succinctly, which is commendable especially considering the limited amount of space. Although I did refer to the main references (e.g. IMPALA and DreamerV3), I don't think it was necessary to do so. - The challenge of sparse environments is a central challenge in RL and one that it is right to spend time studying. Further development of algorithms that make use of transfer learning and intrinsic motivation/reward will likely lead to improvement within this area. - I appreciate the authors' attempt to further investigate an existing method, CBET, within a new algorithm. I think this is often overlooked in RL and I think it is of value to the research community. - The choice of environments is reasonable. Both Crafter and Minigrid present challenges with reward sparsity. Although I have put this under strengths, I do believe this could be further strengthened (see weakness 3). weaknesses: - Although the paper is generally well-written, I think it lacks a primary focus. The central contribution appears to be the application of CBET within the model-based algorithm DreamerV3. Unfortunately, it seems at points to become a comparison between DreamerV3 and IMPALA instead. This is well-studied within the original DreamerV3 paper and it isn't clear to me what additional benefit their analysis provides. Their ability to perform a comparison is also limited by their inability to conduct experimentation with the planning ratio used in the original DreamerV3 paper. This omission is attributed to hardware restrictions, which is understandable but does raise issues. - There is marginal/no difference in the final rewards achieved with and without CBET. Given a suitably sparse reward problem, I would have expected conventional exploration methods (e.g. an entropy bonus as in IMPALA and DreamerV3) to provide limited benefit as they do not provide any direction towards *unusual* states or regions within the state space. Conceivably, the benefit of an intrinsic-reward-based mechanism is that it modifies state values so it can direct exploration to regions that are promising. I would then think that in a suitable environment, it would be plausible to demonstrate that an intrinsic reward improves the policy that the agent converges to. 
As an idea to address this for the authors, I note that there is some difference in sample efficiency in tabula-rasa Minigrid for IMPALA with and without CBET. I'm not familiar with the environment, but could you just make it bigger (and as such more challenging to explore) and then more conclusively show this? I do however note that this would not address weakness 1. - DreamerV3 is state-of-the-art, but as we have observed here, it does not always outperform IMPALA. A clear distinction between these two approaches is that one is model-free and the other is model-based. This would appear to complicate direct comparison between them. Why did you not choose a more recent model-free algorithm instead? - A note on figure 3, bottom left: this is the only plot that appears to show the possibility of CBET improving DreamerV3. In its current form there is still some overlap in the standard deviations. I appreciate that 1M steps is the usual budget, but running this for longer may allow for an easier comparison. confidence: 4 justification: Although the paper explores an important and interesting problem, it lacks a central focus. It appears to get caught up in a comparison of IMPALA and DreamerV3 rather than focusing on its central objective, which is how CBET can help within a more recent algorithm. Given the current results, my interpretation is that it does not appear to help. Although the authors present some ideas to improve upon these methods (e.g. intrinsic reward scheduling), I do not believe there is a clear contribution as of yet and therefore cannot recommend acceptance. final_rebuttal_confidence: 3 final_rebuttal_justification: The paper studies an interesting problem that is of much interest to the RL community. It is mostly well-written and proposes the integration of the CBET mechanism into DreamerV3. This integration is interesting and offers the promise of improving a state-of-the-art model-based algorithm with a mechanism that has mostly been limited to model-free algorithms. I think that their paper certainly progresses this idea but that it could be improved through more comprehensive experimentation across a wider range of environments. Currently there are just two environments. I do recognise that computational resource limitations may make this kind of analysis difficult but do really believe that it would significantly strengthen the paper. Whilst I have recommended rejection at this stage, I would like to stress that this is a very weak reject and I would not stand in the way of its publication at this venue as I think the topic is sufficiently interesting.
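The contrast the reviewer draws between an entropy bonus and an intrinsic reward can be made concrete with a short sketch; both terms below are generic textbook forms (discounting and the specific CBET bonus are omitted), not the paper's implementation.

```python
import torch

def entropy_bonus(action_logits: torch.Tensor, coef: float = 0.01) -> torch.Tensor:
    """Undirected exploration: reward policy entropy regardless of which states are visited."""
    dist = torch.distributions.Categorical(logits=action_logits)
    return coef * dist.entropy().mean()

def shaped_return(r_ext: torch.Tensor, r_int: torch.Tensor, beta: float = 0.01) -> torch.Tensor:
    """Directed exploration: the intrinsic term makes particular (novel) states more valuable."""
    return (r_ext + beta * r_int).sum()   # discounting omitted for brevity
```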
wESwjiOqgz
ProgrEFR: diffondere e promuovere i programmi di ricerca dell’Ecole française de Rome con Wikidata
[ "Elisa Saltetto" ]
Il nostro progetto mira a - promuovere e dare visibilità alle attività scientifiche e ai programmi di ricerca della nostra istituzione - creare dei link tra contenitori di dati di natura complementare (LOD) - rendere accessibili le risorse prodotte dai ricercatori dell’EFR in un’ottica di open data Abbiamo scelto di concentrarci su venti programmi di ricerca, detti strutturanti, che costituiscono una parte trainante dell’attuale attività scientifica dell’EFR. Sono articolati in diverse tematiche di ricerca, ricoprono un ampio arco cronologico (dall’antichità alla storia contemporanea) e, attingendo a diverse metodologie e fonti di ricerca, adottano un approccio multidisciplinare. I programmi strutturanti hanno una durata di quattro o cinque anni. Dato il carattere internazionale dell'istituzione, essi sono realizzati in partenariato con una o più istituzioni straniere o italiane. Per raggiungere questo obiettivo il nostro progetto prevede - la descrizione dei programmi nel database della rete bibliografica delle università francesi (SUDOC/IdRef) - la descrizione dei programmi in HAL (data base in cui i ricercatori affiliati al Ministero della Ricerca e dell’Insegnamento superiore francese versano la loro produzione scientifica con libertà di renderne accessibile o meno i contenuti) - l’inserimento e la descrizione dei programmi in Wikidata Concretamente, per ogni programma sono stati creati gli identificativi IdRef, Hal e gli item Wikidata con i rispettivi rinvii e descrittori adeguati alle caratteristiche proprie ad ogni data base. Gli identificativi propri ad ogni contenitore di dati permettono il dialogo tra i vari sistemi nell’ottica dell’interoperabilità, ricavando dati e informazioni diverse a seconda delle proprietà/ambiti dei vari contenitori. Così IdRef consente essenzialmente una descrizione del programma come un ente autore, Hal raccoglie i relativi studi depositati dai singoli ricercatori e Wikidata diventa soprattutto l’aggregatore che collega le diverse interfacce. ProgrEFR: Disseminating and promoting the research programmes of the Ecole française de Rome with Wikidata Our project aims to - promote and give visibility to the scientific activities and research programmes of our institution - create links between data containers of a complementary nature (LOD) - make the resources produced by EFR researchers accessible in an open data perspective We have chosen to focus on twenty research programmes, known as structuring programmes, which form a driving force in EFR's current scientific activity. They are divided into different research themes, cover a broad chronological span (from antiquity to contemporary history) and, drawing on different research methodologies and sources, adopt a multidisciplinary approach. The structuring programmes have a duration of four or five years. Given the international character of the institution, they are realised in partnership with one or more foreign or Italian institutions. 
To achieve this goal, our project involves - the description of programmes in the database of the bibliographical network of French universities (SUDOC/IdRef) - the description of the programmes in HAL (a database in which researchers affiliated to the French Ministry of Research and Higher Education deposit their scientific production with freedom to make its contents accessible or not) - the entry and description of programmes in Wikidata Specifically, IdRef, HAL and Wikidata item identifiers were created for each programme, with the respective references and descriptors adapted to the characteristics specific to each database. The identifiers proper to each data container allow dialogue between the various systems with a view to interoperability, obtaining different data and information depending on the properties/environments of the various containers. Thus IdRef essentially allows a description of the programme as a corporate name heading, HAL collects the relevant studies deposited by individual researchers, and Wikidata becomes above all the aggregator linking the different interfaces.
[ "Ecole française de Rome – programmi di ricerca – Linked Open Data – open science – open data – IdRef – WikiData – HAL – rete universitaria" ]
https://openreview.net/pdf?id=wESwjiOqgz
https://openreview.net/forum?id=wESwjiOqgz
MG8tIuXNSA
official_review
1,736,252,122,223
wESwjiOqgz
[ "everyone" ]
[ "~Rossana_Morriello1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Research programmes and themes of the EFR in Wikidata review: The project is clear and quite interesting, despite some perplexity about the short duration of the programmes (4-5 years, after which are they regularly updated? in what way?) and about the impact on the Wikidata community beyond the institutions involved in the programmes. These are two aspects on which I would suggest dwelling further. compliance: 4 scientific_quality: 3 originality: 4 impact: 3 confidence: 3
wESwjiOqgz
ProgrEFR: diffondere e promuovere i programmi di ricerca dell’Ecole française de Rome con Wikidata
[ "Elisa Saltetto" ]
Il nostro progetto mira a - promuovere e dare visibilità alle attività scientifiche e ai programmi di ricerca della nostra istituzione - creare dei link tra contenitori di dati di natura complementare (LOD) - rendere accessibili le risorse prodotte dai ricercatori dell’EFR in un’ottica di open data Abbiamo scelto di concentrarci su venti programmi di ricerca, detti strutturanti, che costituiscono una parte trainante dell’attuale attività scientifica dell’EFR. Sono articolati in diverse tematiche di ricerca, ricoprono un ampio arco cronologico (dall’antichità alla storia contemporanea) e, attingendo a diverse metodologie e fonti di ricerca, adottano un approccio multidisciplinare. I programmi strutturanti hanno una durata di quattro o cinque anni. Dato il carattere internazionale dell'istituzione, essi sono realizzati in partenariato con una o più istituzioni straniere o italiane. Per raggiungere questo obiettivo il nostro progetto prevede - la descrizione dei programmi nel database della rete bibliografica delle università francesi (SUDOC/IdRef) - la descrizione dei programmi in HAL (data base in cui i ricercatori affiliati al Ministero della Ricerca e dell’Insegnamento superiore francese versano la loro produzione scientifica con libertà di renderne accessibile o meno i contenuti) - l’inserimento e la descrizione dei programmi in Wikidata Concretamente, per ogni programma sono stati creati gli identificativi IdRef, Hal e gli item Wikidata con i rispettivi rinvii e descrittori adeguati alle caratteristiche proprie ad ogni data base. Gli identificativi propri ad ogni contenitore di dati permettono il dialogo tra i vari sistemi nell’ottica dell’interoperabilità, ricavando dati e informazioni diverse a seconda delle proprietà/ambiti dei vari contenitori. Così IdRef consente essenzialmente una descrizione del programma come un ente autore, Hal raccoglie i relativi studi depositati dai singoli ricercatori e Wikidata diventa soprattutto l’aggregatore che collega le diverse interfacce. ProgrEFR: Disseminating and promoting the research programmes of the Ecole française de Rome with Wikidata Our project aims to - promote and give visibility to the scientific activities and research programmes of our institution - create links between data containers of a complementary nature (LOD) - make the resources produced by EFR researchers accessible in an open data perspective We have chosen to focus on twenty research programmes, known as structuring programmes, which form a driving force in EFR's current scientific activity. They are divided into different research themes, cover a broad chronological span (from antiquity to contemporary history) and, drawing on different research methodologies and sources, adopt a multidisciplinary approach. The structuring programmes have a duration of four or five years. Given the international character of the institution, they are realised in partnership with one or more foreign or Italian institutions. 
To achieve this goal, our project involves - the description of programmes in the database of the bibliographical network of French universities (SUDOC/IdRef) - the description of the programmes in HAL (a database in which researchers affiliated to the French Ministry of Research and Higher Education deposit their scientific production with freedom to make its contents accessible or not) - the entry and description of programmes in Wikidata Specifically, IdRef, HAL and Wikidata item identifiers were created for each programme, with the respective references and descriptors adapted to the characteristics specific to each database. The identifiers proper to each data container allow dialogue between the various systems with a view to interoperability, obtaining different data and information depending on the properties/environments of the various containers. Thus IdRef essentially allows a description of the programme as a corporate name heading, HAL collects the relevant studies deposited by individual researchers, and Wikidata becomes above all the aggregator linking the different interfaces.
[ "Ecole française de Rome – programmi di ricerca – Linked Open Data – open science – open data – IdRef – WikiData – HAL – rete universitaria" ]
https://openreview.net/pdf?id=wESwjiOqgz
https://openreview.net/forum?id=wESwjiOqgz
7f5qEPWXUU
official_review
1,736,245,101,124
wESwjiOqgz
[ "everyone" ]
[ "~Lucia_Sardo1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Review review: The proposal is well structured, with clear and well-presented project objectives and methodology. Carrying out this project could undoubtedly have a positive impact on the knowledge of the EFR's activities and on the promotion of similar projects at other comparable institutions. From the abstract, however, it is not possible to infer the data structure or the operational approach that will be followed to carry out the project itself. compliance: 4 scientific_quality: 3 originality: 5 impact: 4 confidence: 4
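As an illustrative aside to the IdRef/HAL/Wikidata cross-linking described in the abstract above, here is a minimal sketch (in Python; the QID is a placeholder, and no specific property IDs are assumed) of how the external identifiers attached to a programme's Wikidata item can be retrieved from the public SPARQL endpoint, with Wikidata acting as the aggregator between the different databases:

```python
import requests

WDQS = "https://query.wikidata.org/sparql"
QID = "Q0000000"  # placeholder: the Wikidata item of one research programme

# List every external-identifier statement (IdRef, HAL, etc.) attached to the item.
query = f"""
SELECT ?propertyLabel ?value WHERE {{
  wd:{QID} ?claim ?value .
  ?property wikibase:directClaim ?claim ;
            wikibase:propertyType wikibase:ExternalId .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en,it". }}
}}
"""

response = requests.get(
    WDQS,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "ProgrEFR-demo/0.1 (example)"},
)
for row in response.json()["results"]["bindings"]:
    print(row["propertyLabel"]["value"], "->", row["value"]["value"])
```

A query of this kind is one concrete way to document the operational approach that the reviewer finds missing from the abstract.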
uzYOK3x69D
Wikidata e biblioteche per l'AI
[ "Lorenzo Gobbo", "Elisabetta Zonca" ]
This contribution proposes the use of Wikidata as a database index and as an access point for the discovery and automated harvesting of data sets. It also intends to present proposals on the role that Wikidata and libraries may play within research areas related to Natural Language Processing, Machine Learning and Artificial Intelligence. The open nature of the Wikidata ecosystem and the possibility of 'freely' editing and creating new items, while it seems to allow a greater representativeness of real-world entities compared to other closed systems, also raises a series of critical issues regarding the reliability and quality of the available data. To overcome these issues, we propose using Wikidata as an index, that is, as a system for discovering and aggregating authoritative data that can lead, starting from an entity, to externally hosted data that is accessible in machine-readable form and directly importable into one's own system, since it is expressed as Linked Open Data. Using the example of the iconographic collection of the Library of the Accademia di architettura in Mendrisio, this contribution proposes a series of strategies that can lead to an improvement in the findability and reuse of libraries' digital collections. The benefits this approach enables range from the optimisation of the automated management of authority files to the dissemination of data that is crucial for the valorisation of cultural heritage. The second point we wish to discuss concerns the role that libraries could play in the development of Artificial Intelligence, positioning themselves as active actors in the development of the software component and not only as end users of commercial products. In a historical phase, and in a country like Italy, in which the machine-training phase is an open question on which it is possible to act positively, we believe it is both possible and necessary to envisage the involvement of libraries. The quality of the output produced by Large Language Models (LLMs), as well as by smaller, more specific models, inevitably depends on the quantity and quality of the data used for training. In the new digital ecosystem, libraries can support their users by acting as data providers for researchers as well as for generalist commercial services.
[ "Biblioteche", "Indici", "LOD", "AI", "Wikidata" ]
https://openreview.net/pdf?id=uzYOK3x69D
https://openreview.net/forum?id=uzYOK3x69D
j9ffvXEUKH
official_review
1,736,309,985,294
uzYOK3x69D
[ "everyone" ]
[ "~Annick_Farina1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Too much for a single presentation review: The abstract presents two aspects that do not seem directly connected: a way of optimising the automated management of authority files in libraries, and the role that libraries could play in the development of Artificial Intelligence. The link between the various parts of the proposal is not very clear. compliance: 3 scientific_quality: 3 originality: 4 impact: 2 confidence: 2
uzYOK3x69D
Wikidata e biblioteche per l'AI
[ "Lorenzo Gobbo", "Elisabetta Zonca" ]
This contribution proposes the use of Wikidata as a database index and as an access point for the automated discovery and harvesting of databases. It also aims to put forward proposals on the role that Wikidata and libraries could play within research areas related to Natural Language Processing, Machine Learning and Artificial Intelligence. The open nature of the Wikidata ecosystem and the possibility of 'freely' editing and creating new items, while on the one hand appearing to allow a greater representativeness of real-world entities compared with other closed systems, on the other hand raises a series of critical issues concerning the reliability and quality of the available data. To overcome these issues, we propose using Wikidata as an index, that is, as a discovery and aggregation system for authoritative data capable of leading, starting from an entity, to externally hosted data that is accessible in machine-readable form and directly importable into one's own system, since it is expressed as Linked Open Data. Using the example of the iconographic collection of the Biblioteca dell'Accademia di architettura di Mendrisio, this contribution proposes a set of strategies that can lead to an improvement in the discoverability and reuse of libraries' digital collections. The benefits enabled by this approach range from the optimization of the automated management of authority files to the dissemination of data that is crucial for the valorization of cultural heritage. The second point we wish to discuss concerns the role that libraries could play in the development of Artificial Intelligence, positioning themselves as active participants in the development of the software component and not only as end users of commercial products. At a historical moment, and in a country such as Italy, in which the machine-training phase is an open question on which it is possible to act positively, we believe it is possible and necessary to envisage the involvement of libraries. The quality of the output produced by Large Language Models (LLMs), as well as by smaller, more specialised models, inevitably depends on the quantity and quality of the data used for training. In the new digital ecosystem, libraries can support their users by acting as data providers for researchers as well as for general-purpose commercial services.
[ "Biblioteche", "Indici", "LOD", "AI", "Wikidata" ]
https://openreview.net/pdf?id=uzYOK3x69D
https://openreview.net/forum?id=uzYOK3x69D
TP4gFMR8TY
official_review
1,736,249,799,872
uzYOK3x69D
[ "everyone" ]
[ "~Rossana_Morriello1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: The objective is not clear review: The objective of the contribution is not clear in relation to the theme of the conference, which is "Wikidata and Research". Many, indeed too many, topics are placed side by side without making explicit the link between them, the link with the conference theme, or the way in which they will be addressed (Wikidata as an index, dissemination of data to valorize collections, reuse of digital collections, the case of the Biblioteca di Mendrisio, the role of librarians in software development, LLMs). Each of these aspects could be the subject of a paper on its own. Taken all together, they seem very unlikely to be realistically addressed with sufficient depth in a single conference paper. Moreover, they do not seem directly connected to the focus of the conference ("open data, collaborative open research infrastructures and research assessment"). compliance: 3 scientific_quality: 3 originality: 3 impact: 3 confidence: 3 notes: I suggest concentrating on a couple of issues at most and outlining the objective of the contribution more clearly.
ucYfDgIRFC
Modeling Architectural Heritage in Wikidata: A Case Study of Early Modern European Hospitals
[ "Frieder Leipold", "Maximilian Kristen", "Lily Marie Baumeister", "Isabella Limmer", "Chiara Franceschini" ]
Architectural heritage monuments are complex cultural artifacts that evolve over time, embodying multiple uses, users and architectural transformations. This paper introduces a new modeling framework for architectural heritage in Wikidata, developed within the context of the ERC project ARCHIATER - Heritage of Disease: The Art and Architectures of Early Modern Hospitals in European Cities. By focusing on hospitals as architectural and social landmarks, this work refines and extends existing data structures for architecture in Wikidata to capture building uses, modifications, and art historical objects associated with these institutions. Our approach leverages Wikidata's ontology to incorporate more detailed temporal and provenance data, enabling the documentation of building histories, artistic objects, and their trajectories within the broader European context (see Fig 1.), while utilizing WikiFAIR methodologies to ensure data sustainability and maximize reuse across research domains. Through this enhanced modeling, we aim to create a rich, interconnected dataset that serves both academic and public audiences interested in art history, history of medicine, and architectural heritage. The paper highlights challenges and solutions in normalizing the existing and newly added disparate datasets from literature and other research projects, and discusses strategies to bridge gaps between scholarly research and Wikidata’s open, collaborative environment. We also explore the potential for data visualizations and analyses that provide new insights into the role of hospitals as cultural and architectural nodes (see Fig 2). This case study not only advances the use of Wikidata for modeling complex historical phenomena but also underscores its potential as a research infrastructure for interdisciplinary studies. By fostering collaboration between academic and Wikimedia communities, this work demonstrates how structured open data can illuminate the dynamic histories of architectural heritage monuments while contributing to broader Open Science objectives.
[ "Wikidata", "Architectural Heritage", "Data Modelling", "Temporal Modeling", "Early Modern Hospitals", "WikiFAIR" ]
https://openreview.net/pdf?id=ucYfDgIRFC
https://openreview.net/forum?id=ucYfDgIRFC
VgooYXEJnJ
official_review
1,736,692,982,315
ucYfDgIRFC
[ "everyone" ]
[ "~Iolanda_Pensa1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Interesting topic but not sufficient elements to understand the relevance review: Exploring the uses and histories of hospitals seems to me very interesting, and I find it relevant and useful to create a link between Wikidata and the research project ARCHIATER - Heritage of Disease: The Art and Architectures of Early Modern Hospitals in European Cities (2024-2028). The abstract, though, doesn't make any reference to what is already available on Wikidata and what the new modelling framework for architectural heritage is. In Fig. 1 I see, on a map, data about some European hospitals; when I select the items, they do not present data related to building uses, modifications, and art historical objects associated with these institutions. Looking online I found https://archiater.hypotheses.org/1015 and https://www.wikidata.org/wiki/Wikidata:WikiProject_Spital. The abstract doesn't provide sufficient insight to understand what the new framework is about and the case study appears – from the abstract and the info available online – to be in a very early stage, without uploads. There seems to be no example available of this new framework applied to Wikidata and the new framework is not described on https://www.wikidata.org/wiki/Wikidata:WikiProject_Spital. WikiFAIR is a nice initiative created by Max Kristen in 2023 and submitted by Max Kristen and Frieder Leipold to the Wikimedia Foundation https://openreview.net/pdf?id=LFIfMNbz77. It would be nice to connect it better with the current project about Open Science and the Wikimedia projects (Daniel Mietchen has worked and is working extensively on this). compliance: 4 scientific_quality: 3 originality: 3 impact: 4 confidence: 3 notes: Maybe the authors can provide an example of a Wikidata item updated with the new framework or information related to the framework on https://www.wikidata.org/wiki/Wikidata:WikiProject_Spital. The topic – at the moment not completely developed – could be adequate for a lightning talk or a poster.
rqmdxlzHpD
An ontology for Italian theatrical cultural heritage on wikibase.cloud
[ "Donatella Gavrilovich", "Giovanni Bergamin", "Valeria Paraninfi" ]
The aim of the Hyperstage project is to create an Open Knowledge Base for the semantic reconstruction of theatrical productions through the harvesting and processing of resources from the New Italian Network of theatrical digital archives, supported by the Ministry of University and funded by the European Union-Next Generation EU. If we want to build ‘semantic web’ aware services, a domain-specific ontology is needed. One useful starting point for identifying existing initiatives is undoubtedly the work done by Wikidata:WikiProject Performing arts group. Taking into account that the theatrical domain lacks a single, universally adopted ontology, we decided to develop a bottom-up ontology. We decided to start with a ready-to-use technological solution, namely Wikibase.cloud. In this context, Wikibase emerges as one of the most suitable platforms due to its data model. This model allows for the enrichment of RDF statements with qualifiers, which is particularly valuable for capturing the complexities of theatrical productions. According to a recent survey of real databases documenting theater performances, most data models rely on the distinction between theatrical creative work and theatrical production, while also claiming the need to adapt bibliographic models (namely IFLA LRM) to the specific domain of theatrical performances. It is important to note that theatrical creative work, as an intellectual construct, is an abstraction that facilitates the reference to a common identity beyond specific theatrical productions. Consequently, leveraging this distinction, each theatrical production, while unique, will be placed within a broader historical and cultural context through a system of hierarchical relationships highlighting its connections to theatrical tradition. This ontological hierarchy will allow us to trace the evolutionary path of works and identify influences and connections between different productions. The Hyperstage project also aims to overcome the traditional limitations of theatrical documentation, offering an innovative solution for the collection, organization, and valorization of metadata related to theatrical productions. Archiving performative assets is considered a daunting task, given the ephemeral nature of theatrical events. However, digital technologies offer new perspectives to preserve and give value to intangible cultural heritage. Hyperstage aims to enhance the value of Italy's performing arts cultural heritage by facilitating access to and interoperability of a vast corpus of digital resources associated with each theatrical production. During ontology development, we investigated different methods for linking digital resources to theatrical productions. A key objective is ensuring efficient user access to these resources. We explored three approaches, using either string properties or item properties, each with its own limitations and potential. To facilitate user access to digital objects, the Hyperstage ontology utilizes a PKB taxonomy. This taxonomy categorizes digital objects by specific types and assigns them to one of three phases of theatrical production: a) conception, b) staging, and c) post-production documentation. The PKB code streamlines the organization of digital assets related to theatrical production. By automatically aggregating each digital object into one of these three phases, the code addresses challenges posed by the sheer volume and diverse nature of digital resources.
This logical temporal classification simplifies the management of large datasets and unifies heterogeneous documents starting from two broad categories: "pictures" and "attachments".
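The statement-plus-qualifiers mechanism that this abstract relies on can be sketched in a few lines of pywikibot. This is only an illustration, not the Hyperstage implementation: the Hyperstage ontology lives on a wikibase.cloud instance whose local property IDs are not given here, so the snippet uses well-known Wikidata properties (P175 performer, P580 start time) and the Wikidata sandbox item as stand-ins, and it assumes a configured pywikibot setup with edit rights.

```python
# Illustration only, not the Hyperstage implementation: shows how a Wikibase statement
# on a production item can carry a qualifier, which is the data-model feature the
# abstract highlights. Wikidata properties and the sandbox item stand in for the
# project's local wikibase.cloud IDs; a configured pywikibot user with edit rights
# is assumed.
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")      # a wikibase.cloud family would be configured analogously
repo = site.data_repository()

production = pywikibot.ItemPage(repo, "Q4115189")  # Wikidata sandbox item, safe for experiments
performer = pywikibot.ItemPage(repo, "Q42")        # arbitrary target, replace with a real performer item

claim = pywikibot.Claim(repo, "P175")              # performer
claim.setTarget(performer)
production.addClaim(claim, summary="example: add a performer statement")

qualifier = pywikibot.Claim(repo, "P580")          # start time, qualifying the statement above
qualifier.setTarget(pywikibot.WbTime(year=1921, month=5, day=9))
claim.addQualifier(qualifier, summary="example: qualify the statement with a start time")
```

In the Hyperstage model the same mechanism could, for instance, attach a PKB phase (conception, staging, post-production documentation) to the statement linking a digital object to a production; that mapping is the project's own and is only gestured at here.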
[ "IFLA LRM", "Theatrical productions", "Theatrical works", "Digital objects", "Bottom-up ontology", "Cultural heritage", "Digitalization", "Wikibase data model", "Performing arts" ]
https://openreview.net/pdf?id=rqmdxlzHpD
https://openreview.net/forum?id=rqmdxlzHpD
jYzd6J15Pk
official_review
1,736,753,656,174
rqmdxlzHpD
[ "everyone" ]
[ "~Carlo_Bianchini1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A Wikibase instance for the collection, organization, and valorization of metadata related to theatrical productions review: The proposal investigates the possibility of creating a dataset that leverages the distinction between theatrical creative work and theatrical production, and offers a methodologically innovative and relevant solution for the collection, organization, and valorization of metadata related to theatrical productions. Of particular interest is the taxonomy based on the three categories of a) conception, b) staging, and c) post-production documentation. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 4
rqmdxlzHpD
An ontology for Italian theatrical cultural heritage on wikibase.cloud
[ "Donatella Gavrilovich", "Giovanni Bergamin", "Valeria Paraninfi" ]
The aim of the Hyperstage project is to create an Open Knowledge Base for the semantic reconstruction of theatrical productions through the harvesting and processing of resources from the New Italian Network of theatrical digital archives, supported by the Ministry of University and funded by the European Union-Next Generation EU. If we want to build ‘semantic web’ aware services, a domain-specific ontology is needed. One useful starting point for identifying existing initiatives is undoubtedly the work done by Wikidata:WikiProject Performing arts group. Taking into account that the theatrical domain lacks a single, universally adopted ontology, we decided to develop a bottom-up ontology. We decided to start with a ready-to-use technological solution, namely Wikibase.cloud. In this context, Wikibase emerges as one of the most suitable platforms due to its data model. This model allows for the enrichment of RDF statements with qualifiers, which is particularly valuable for capturing the complexities of theatrical productions. According to a recent survey of real databases documenting theater performances, most data models rely on the distinction between theatrical creative work and theatrical production, while also claiming the need to adapt bibliographic models (namely IFLA LRM) to the specific domain of theatrical performances. It is important to note that theatrical creative work, as an intellectual construct, is an abstraction that facilitates the reference to a common identity beyond specific theatrical productions. Consequently, leveraging this distinction, each theatrical production, while unique, will be placed within a broader historical and cultural context through a system of hierarchical relationships highlighting its connections to theatrical tradition. This ontological hierarchy will allow us to trace the evolutionary path of works and identify influences and connections between different productions. The Hyperstage project also aims to overcome the traditional limitations of theatrical documentation, offering an innovative solution for the collection, organization, and valorization of metadata related to theatrical productions. Archiving performative assets is considered a daunting task, given the ephemeral nature of theatrical events. However, digital technologies offer new perspectives to preserve and give value to intangible cultural heritage. Hyperstage aims to enhance the value of Italy's performing arts cultural heritage by facilitating access to and interoperability of a vast corpus of digital resources associated with each theatrical production. During ontology development, we investigated different methods for linking digital resources to theatrical productions. A key objective is ensuring efficient user access to these resources. We explored three approaches, using either string properties or item properties, each with its own limitations and potential. To facilitate user access to digital objects, the Hyperstage ontology utilizes a PKB taxonomy. This taxonomy categorizes digital objects by specific types and assigns them to one of three phases of theatrical production: a) conception, b) staging, and c) post-production documentation. The PKB code streamlines the organization of digital assets related to theatrical production. By automatically aggregating each digital object into one of these three phases, the code addresses challenges posed by the sheer volume and diverse nature of digital resources.
This logical temporal classification simplifies the management of large datasets and unifies heterogeneous documents starting from two broad categories: "pictures" and "attachments".
[ "IFLA LRM", "Theatrical productions", "Theatrical works", "Digital objects", "Bottom-up ontology", "Cultural heritage", "Digitalization", "Wikibase data model", "Performing arts" ]
https://openreview.net/pdf?id=rqmdxlzHpD
https://openreview.net/forum?id=rqmdxlzHpD
CqHgR58QfB
official_review
1,736,527,987,085
rqmdxlzHpD
[ "everyone" ]
[ "~Silvia_Bruni1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: An ontology for theatrical productions in Wikidata: enhancing the organization and accessibility of data related to the performing arts review: The project, working on an ontology, is undoubtedly very innovative within the context of Wikidata. It addresses, moreover, a set of highly complex data concerning theatrical performances. These digital objects, as emphasized in the abstract, refer to different phases of production: conception, staging, and documentation. The proposed and tested solutions certainly deserve to be illustrated and showcased. It is noteworthy that an ontology concerning works that are inherently ephemeral holds added value in terms of data preservation. The project aims to transcend the traditional limits of theatrical documentation, which are not entirely resolved by bibliographic models such as IFLA LRM. The project aims to be a solution for the enhancement of digital theatrical archives through various innovative strategies and advanced technologies. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 4
qqynzN8K5P
THE "MARE MAGNUM" ON WIKIDATA
[ "VALENTINA SONZINI" ]
Marucelli's "Mare Magnum" is one of the most interesting and least studied bibliographic repertories in the world. Thanks to an agreement between the SAGAS Department of the University of Florence and the Biblioteca Marucelliana, which holds the eighteenth-century manuscript, a series of studies by students of bibliography and the history of the book has brought to light the authors and printers cited by Marucelli. These data, checked against authority files, formed the basis of a massive insertion into Wikidata, linking the "Mare Magnum" to its bibliographical content.
[ "Mare Magnum", "Bibliography", "History of printing" ]
https://openreview.net/pdf?id=qqynzN8K5P
https://openreview.net/forum?id=qqynzN8K5P
LwJqiICmII
official_review
1,736,494,731,763
qqynzN8K5P
[ "everyone" ]
[ "~Monica_Berti1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A poster on the results of a valuable completed project on the massive insertion of bibliographic data into Wikidata review: The author wants to present a poster to show the work done by students of bibliography and book history to bring to light the authors and printers cited in the Mare Magnum of Francesco Marucelli. This data is now part of Wikidata and is very important because it links the Mare Magnum to its bibliographic content. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 4
qqynzN8K5P
THE "MARE MAGNUM" ON WIKIDATA
[ "VALENTINA SONZINI" ]
Marucelli's "Mare Magnum" is one of the most interesting and least studied bibliographic repertories in the world. Thanks to an agreement between the SAGAS Department of the University of Florence and the Biblioteca Marucelliana, which holds the eighteenth-century manuscript, a series of studies by students of bibliography and the history of the book has brought to light the authors and printers cited by Marucelli. These data, checked against authority files, formed the basis of a massive insertion into Wikidata, linking the "Mare Magnum" to its bibliographical content.
[ "Mare Magnum", "Bibliography", "History of printing" ]
https://openreview.net/pdf?id=qqynzN8K5P
https://openreview.net/forum?id=qqynzN8K5P
FJ9H3uqQc3
official_review
1,737,041,635,556
qqynzN8K5P
[ "everyone" ]
[ "~Elena_Marangoni1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A source of great value review: An excellent example of a project for importing a source into Wikidata, valorizing it and linking it to other data. The enrichment made possible by the studies carried out on the source, which allowed the authors and printers cited in it to be identified, is very interesting. The poster format is appropriate. compliance: 5 scientific_quality: 5 originality: 4 impact: 4 confidence: 5
pGhAqNJQMB
La bibliografia di Giacomo Caputo e il ruolo di Wikidata per la sua valorizzazione
[ "Manuela Parrilli" ]
The bibliography of Giacomo Caputo and the role of Wikidata in its valorization. The book collection of Giacomo Caputo (1901-1992), owned by the University of Florence and held at the Museo e Istituto Fiorentino di Preistoria "Paolo Graziosi", is a precious resource for investigating the scientific output and intellectual context of one of the protagonists of twentieth-century archaeology. The research project "Il fondo archivistico e librario di Giacomo Caputo: archeologia e restauro architettonico in una biblioteca d'autore" also aims to valorize his bibliography, including unpublished or little-known contributions to his literary output that emerged during the survey of the collection: these are short contributions, reviews and methodological notes belonging to the so-called "grey literature", hidden within journals, periodicals and bulletins, but which contribute to the reconstruction of the author's complete bibliography. Through Wikidata, the project aims to transform Caputo's bibliography into a dynamic ecosystem thanks to the interaction of the data with other relevant items, fostering global accessibility, interoperability and data visualization, thus overcoming the limits of static bibliographies. This strategy not only strengthens the bibliographic corpus, but also makes it possible to give the wider public verified and queryable data, feeding new insights and helping to trace new paths for research. The project is part of the current debate on the use of innovative methodologies for the valorization of bibliographies and authors' libraries, proposing a replicable model for the management and valorization of these important collections. [The presentation will be given in Italian.]
[ "bibliografia; biblioteca d’autore; gestione fondi personali" ]
https://openreview.net/pdf?id=pGhAqNJQMB
https://openreview.net/forum?id=pGhAqNJQMB
qIqRq065Q9
official_review
1,736,254,925,743
pGhAqNJQMB
[ "everyone" ]
[ "~Elena_Marangoni1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Wikidata for a personal library and bibliography review: The proposal is very interesting for the idea of an enriched and dynamic bibliography, made possible thanks to the interaction with other data in Wikidata. I think the most valuable aspect is that of integration and data visualization for the discovery of new knowledge about the author and his milieu. compliance: 4 scientific_quality: 4 originality: 4 impact: 3 confidence: 4
pGhAqNJQMB
La bibliografia di Giacomo Caputo e il ruolo di Wikidata per la sua valorizzazione
[ "Manuela Parrilli" ]
The bibliography of Giacomo Caputo and the role of Wikidata in its valorization. The book collection of Giacomo Caputo (1901-1992), owned by the University of Florence and held at the Museo e Istituto Fiorentino di Preistoria "Paolo Graziosi", is a precious resource for investigating the scientific output and intellectual context of one of the protagonists of twentieth-century archaeology. The research project "Il fondo archivistico e librario di Giacomo Caputo: archeologia e restauro architettonico in una biblioteca d'autore" also aims to valorize his bibliography, including unpublished or little-known contributions to his literary output that emerged during the survey of the collection: these are short contributions, reviews and methodological notes belonging to the so-called "grey literature", hidden within journals, periodicals and bulletins, but which contribute to the reconstruction of the author's complete bibliography. Through Wikidata, the project aims to transform Caputo's bibliography into a dynamic ecosystem thanks to the interaction of the data with other relevant items, fostering global accessibility, interoperability and data visualization, thus overcoming the limits of static bibliographies. This strategy not only strengthens the bibliographic corpus, but also makes it possible to give the wider public verified and queryable data, feeding new insights and helping to trace new paths for research. The project is part of the current debate on the use of innovative methodologies for the valorization of bibliographies and authors' libraries, proposing a replicable model for the management and valorization of these important collections. [The presentation will be given in Italian.]
[ "bibliografia; biblioteca d’autore; gestione fondi personali" ]
https://openreview.net/pdf?id=pGhAqNJQMB
https://openreview.net/forum?id=pGhAqNJQMB
VNv5xtr6fY
official_review
1,736,775,969,563
pGhAqNJQMB
[ "everyone" ]
[ "~Silvia_Bruni1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Wikidata for the creation of a dynamic bibliography and the valorization of a grey-literature collection review: Experimenting with Wikidata as a tool for creating a dynamic bibliography is a topic of great interest. The fact that it concerns a grey-literature collection and includes analytical entries that are difficult to find is a further merit of the project, as is the enrichment and valorization of the data relating to the author and his disciplinary context. compliance: 5 scientific_quality: 4 originality: 4 impact: 4 confidence: 4
lYwXKrxmGj
Wikidata and research: a decadal view
[ "Daniel Mietchen" ]
The breadth, depth and diversity of interactions between the research ecosystem and the ecosystem around Wikidata have evolved considerably over the years. These interactions include, for instance, research about Wikidata and Wikibase matters as well as research-related content, infrastructure or activities with a Wikidata or Wikibase component. This contribution is intended for researchers of any background as well as science communicators and other stakeholders in the research landscape. It aims to shine some spotlights on key facets of these interactions and their evolution, considering developments within and beyond both ecosystems, such as the trends towards increased openness in research workflows or towards using Wikibase instances as a platform for collaborative and multilingual curation of structured data. Based on patterns of opportunities and challenges observed for such interactions so far, we will explore some of their major drivers and ponder a set of potential pathways into the future, taking into account multiple dimensions, including alignment between workflows in both ecosystems as well as the sustainability of their respective infrastructure, content and communities. Last but not least, the accepted submissions to the conference will be situated in this evolving landscape and considered together with facets receiving less attention.
[ "Wikidata", "open science", "metascience" ]
https://openreview.net/pdf?id=lYwXKrxmGj
https://openreview.net/forum?id=lYwXKrxmGj
dpFSgS7DQs
official_review
1,736,015,793,865
lYwXKrxmGj
[ "everyone" ]
[ "~Luca_Martinelli1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: I approve review: Mietchen is a long-standing Wikidata contributor and one of the main driving forces behind the Scholia project, dedicated to mapping scientific papers on Wikidata. The approach shown in the abstract is certainly of interest for discussing the future of Wikidata and of the existing federation of Wikibase instances, particularly given the author's experience. His perspective will certainly be an interesting starting point for a discussion that the community needs to have in order to ensure the sustainability and relevance of the project over the medium to long term. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 5
lYwXKrxmGj
Wikidata and research: a decadal view
[ "Daniel Mietchen" ]
The breadth, depth and diversity of interactions between the research ecosystem and the ecosystem around Wikidata have evolved considerably over the years. These interactions include, for instance, research about Wikidata and Wikibase matters as well as research-related content, infrastructure or activities with a Wikidata or Wikibase component. This contribution is intended for researchers of any background as well as science communicators and other stakeholders in the research landscape. It aims to shine some spotlights on key facets of these interactions and their evolution, considering developments within and beyond both ecosystems, such as the trends towards increased openness in research workflows or towards using Wikibase instances as a platform for collaborative and multilingual curation of structured data. Based on patterns of opportunities and challenges observed for such interactions so far, we will explore some of their major drivers and ponder a set of potential pathways into the future, taking into account multiple dimensions, including alignment between workflows in both ecosystems as well as the sustainability of their respective infrastructure, content and communities. Last but not least, the accepted submissions to the conference will be situated in this evolving landscape and considered together with facets receiving less attention.
[ "Wikidata", "open science", "metascience" ]
https://openreview.net/pdf?id=lYwXKrxmGj
https://openreview.net/forum?id=lYwXKrxmGj
VaIqTFIQL4
official_review
1,736,108,363,662
lYwXKrxmGj
[ "everyone" ]
[ "~Camillo_Carlo_Pellizzari_di_San_Girolamo1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A much needed overview of research with and about Wikidata and the Wikibase ecosystem review: The paper deals with the relationship between research projects and "the ecosystem around Wikidata", considering not only Wikidata itself, but also the increasing importance of other Wikibase instances interconnected with it; given the fast evolution of these environments in recent years, and especially the concerns about the sustainability of Wikidata's growth, an in-depth reflection on the future evolution of this interaction is strongly needed and should also be one of the main topics of this conference. The author has an excellent knowledge of the representation of scientific literature in Wikidata and, more generally, of the interaction between Wikidata and research, and has already published important papers about this theme. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 5
jXtR2U51ce
Una proposta per la gestione dei fondi personali in Wikidata
[ "Tania Maio" ]
Personal collections (fondi personali) are "organic complexes of published and/or unpublished materials collected and/or produced by significant figures from the worlds of culture, the professions and the arts, mainly from the second half of the 19th century onwards". Although such complexes take the form of different documentary typologies (authors' libraries, personal archives, cultural archives), the aggregating element remains the individual, and the corpus is therefore a document of and witness to the interests, activities and relationships of the person within the historical and cultural context in which they operated. The main difficulty in managing these complexes is catalographic description, owing to the different types of documents and objects to be described, which presupposes the use of specific standards for each type of document. The experiments carried out to test the suitability of Wikidata for the analytical, copy-by-copy description of such collections have led to the conclusion that it is not appropriate to use Wikidata for that purpose. The use of a Wikibase instance adapted to this purpose turns out to be better. However, Wikidata is very effective for describing a personal collection treated as a whole. Describing a personal collection in Wikidata makes it possible to place the collection within a network of relationships whose nodes are the creator, the holding institution, previous owners, the place of conservation, the authors of notes, dedications and marginalia found in the volumes, and so on. These relationships cross the boundaries of interest of the holding institution to restore connections lost for various reasons, including the dismemberment of personal collections and their conservation at different cultural institutions. Another advantage is the possibility of increasing knowledge about the documentary material and its owner, thanks to data from different information sources converging in Wikidata. The research carried out shows that, although many collections are described in Wikidata, these items use a non-standardized data model and an uncontrolled "vocabulary" of terms for the labels. This lack of homogeneity causes a great dispersion of the data and makes it difficult to trace all the items that refer to the personal-collection typology. The creation of a dedicated Wikidata:Wikiproject could solve this problem, acting as a point of coordination and a source of good practices for those who wish to enter a personal collection in Wikidata. The paper will propose the creation of such a Wikiproject and a "personal collection" data model, which will follow the information present in the collection survey form on which the AIB Commissione nazionale Biblioteche speciali, archivi e biblioteche d'autore is working, and which will be made available to fellow librarians in the course of 2025. In this way the two activities, and the communication around them, could proceed in parallel, managing to reach more professionals in GLAM institutions, both in the use of the personal-collection survey form within their own institution and in the corresponding creation of a Wikidata item for the personal collection being worked on.
The aims of the project will be: the entry, enrichment and valorization of data on personal collections in Wikidata; the implementation and maintenance of multilingual ontologies and thesauri for the description of personal collections; the interconnection between collection catalogues and Wikidata; and the inclusion of the data in Wikipedia and its sister projects.
[ "fondi personali", "special collections", "biblioteche d'autore", "wikidata" ]
https://openreview.net/pdf?id=jXtR2U51ce
https://openreview.net/forum?id=jXtR2U51ce
zYTNdhrMVU
official_review
1,736,521,109,833
jXtR2U51ce
[ "everyone" ]
[ "~Silvia_Bruni1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A Wikiproject for entering personal collections in Wikidata review: The proposal is of great interest because it reflects on a model that connects the description of personal collections on Wikidata with a tool that will be used in Italian libraries (the collection survey form of the AIB Commissione nazionale Biblioteche speciali, archivi e biblioteche d'autore). It also takes into consideration the interrelation between Wikibase and Wikidata (the former for the analytical description of collections, the latter for personal collections treated as a whole). The experimentation with a Wikiproject will contribute to enriching the tools available to the community of Wikidatans, in particular for GLAM institutions, which will be able to adopt an already tested workflow. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 4
jXtR2U51ce
Una proposta per la gestione dei fondi personali in Wikidata
[ "Tania Maio" ]
Personal collections (fondi personali) are "organic complexes of published and/or unpublished materials collected and/or produced by significant figures from the worlds of culture, the professions and the arts, mainly from the second half of the 19th century onwards". Although such complexes take the form of different documentary typologies (authors' libraries, personal archives, cultural archives), the aggregating element remains the individual, and the corpus is therefore a document of and witness to the interests, activities and relationships of the person within the historical and cultural context in which they operated. The main difficulty in managing these complexes is catalographic description, owing to the different types of documents and objects to be described, which presupposes the use of specific standards for each type of document. The experiments carried out to test the suitability of Wikidata for the analytical, copy-by-copy description of such collections have led to the conclusion that it is not appropriate to use Wikidata for that purpose. The use of a Wikibase instance adapted to this purpose turns out to be better. However, Wikidata is very effective for describing a personal collection treated as a whole. Describing a personal collection in Wikidata makes it possible to place the collection within a network of relationships whose nodes are the creator, the holding institution, previous owners, the place of conservation, the authors of notes, dedications and marginalia found in the volumes, and so on. These relationships cross the boundaries of interest of the holding institution to restore connections lost for various reasons, including the dismemberment of personal collections and their conservation at different cultural institutions. Another advantage is the possibility of increasing knowledge about the documentary material and its owner, thanks to data from different information sources converging in Wikidata. The research carried out shows that, although many collections are described in Wikidata, these items use a non-standardized data model and an uncontrolled "vocabulary" of terms for the labels. This lack of homogeneity causes a great dispersion of the data and makes it difficult to trace all the items that refer to the personal-collection typology. The creation of a dedicated Wikidata:Wikiproject could solve this problem, acting as a point of coordination and a source of good practices for those who wish to enter a personal collection in Wikidata. The paper will propose the creation of such a Wikiproject and a "personal collection" data model, which will follow the information present in the collection survey form on which the AIB Commissione nazionale Biblioteche speciali, archivi e biblioteche d'autore is working, and which will be made available to fellow librarians in the course of 2025. In this way the two activities, and the communication around them, could proceed in parallel, managing to reach more professionals in GLAM institutions, both in the use of the personal-collection survey form within their own institution and in the corresponding creation of a Wikidata item for the personal collection being worked on.
The aims of the project will be: the entry, enrichment and valorization of data on personal collections in Wikidata; the implementation and maintenance of multilingual ontologies and thesauri for the description of personal collections; the interconnection between collection catalogues and Wikidata; and the inclusion of the data in Wikipedia and its sister projects.
[ "fondi personali", "special collections", "biblioteche d'autore", "wikidata" ]
https://openreview.net/pdf?id=jXtR2U51ce
https://openreview.net/forum?id=jXtR2U51ce
34o7dEzgZ7
official_review
1,736,593,727,461
jXtR2U51ce
[ "everyone" ]
[ "~Rossana_Morriello1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Clear and relevant proposal review: The proposal is consistent with the theme of the conference, and the objectives and methodology are presented clearly. It sets out to illustrate a project that is already under way and a data model that will be made available in 2025, and is therefore genuinely useful for academic and research librarians, who often have to deal with personal and authors' collections. compliance: 5 scientific_quality: 5 originality: 4 impact: 5 confidence: 3
j71riDLzjg
Whose History We Keep: Benchmarking Wikidata's Record of the Past's Protagonists
[ "Lennart Finke", "Sarah Sophie Pohl", "Janek Große", "Luca Lukacevic", "Stefan Haas" ]
Historical science compounds millennia of records of human activity, which have become available in digital, searchable form. Focusing on the largest such dataset—Wikidata—we attempt to find quantitative answers to the questions: Whose periods and which places' people do we know most about? What interests us about them? Who are we forgetting? What trends underlie the number of people registered and written about over time? We examine the individuals who have ended up in these datasets in relation to their readership, thereby shedding light not only on history itself but also on the practice of writing it. We begin with our finding that not only the number of people registered in Wikidata, but surprisingly also the fraction of the world population that is registered, follows an exponential increase over time. This is further amplified by accelerations in certain critical periods (such as around 600 BCE, 100 CE, 1500 CE, and 1740 CE), resulting in superexponential growth. In contrast, we analyze the precision of recorded birth dates and find a linear increase in the availability of birth dates precise to the decade and year. Here, we also observe a sudden increase in precision around 1500 CE, possibly due to the introduction of the printing press. Curiously, we find a statistically significant overrepresentation of certain birth months but no such effect for weekdays. The spatio-temporal analysis, based on an annotation by Laouenan et al., poignantly shows the shift of cultural centers over time, from the Middle East and China to Central Europe and later to North America. We point out that large sections of human history are staggeringly absent from the dataset; for instance, the Mughal Empire and premodern China after the Three Kingdoms Era have few representatives, despite constituting a significant portion of the world population of their time. We then turn our attention to the relationship between Wiki editors, readers, and the people they describe. First, we quantify a highly significant effect of the spoken language of associated Wikipedia entries and the country of origin of the person written about, which somewhat extends to geographically proximal countries as well. While the number of Wikidata entries per person over time follows a monotonic, quasi-exponential growth, the article reads per person over time are not monotonic at all, with, for instance, people from 500 BCE receiving more reads than those from 1400 CE, despite likely being fewer in number. A clear exponential growth in readership only starts around 1750 CE. Further, women and non-binary individuals, conditional on having an article about them, receive consistently more reads than men. Finally, we reflect on future developments of these metrics. Projecting our data forward, we expect that more than 1 in 1,000 people born today will receive a Wikidata entry (for comparison, more than 1 in 5,000 people have a Wikidata entry today). We hope these representatives will reflect what we care about, and we look forward to an era where societal-scale historical science can study everyone who wants to be studied.
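The population-level counts described in this abstract ultimately reduce to aggregating Wikidata's humans (P31 = Q5) by date of birth (P569). The snippet below is a minimal sketch of such a query against the public Wikidata Query Service, not the authors' pipeline; a full aggregation over all of Wikidata's humans may well exceed the live endpoint's timeout, so an analysis at this scale would in practice be run on Wikidata dumps.

```python
# Minimal sketch (not the authors' pipeline): count Wikidata humans per birth century
# via the public Wikidata Query Service, assuming the standard P31=Q5 / P569 modelling.
# A full aggregation may hit the endpoint's 60-second timeout; large-scale analyses
# normally use Wikidata dumps instead.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
SELECT ?century (COUNT(?person) AS ?people) WHERE {
  ?person wdt:P31 wd:Q5 ;      # instance of: human
          wdt:P569 ?birth .    # date of birth
  BIND(FLOOR(YEAR(?birth) / 100) AS ?century)
}
GROUP BY ?century
ORDER BY ?century
"""

def humans_per_century(endpoint="https://query.wikidata.org/sparql"):
    sparql = SPARQLWrapper(endpoint, agent="wikidata-history-sketch/0.1")
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return {
        int(float(row["century"]["value"])): int(row["people"]["value"])
        for row in results["results"]["bindings"]
    }

if __name__ == "__main__":
    for century, count in sorted(humans_per_century().items()):
        print(f"century {century}: {count} people with a recorded birth date")
```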
[ "Digital History", "Methodology of History", "Open Data", "Wikidata" ]
https://openreview.net/pdf?id=j71riDLzjg
https://openreview.net/forum?id=j71riDLzjg
YPJoghg4NQ
official_review
1,736,308,665,518
j71riDLzjg
[ "everyone" ]
[ "~Annick_Farina1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Statistical analysis of historical records on Wikidata review: The authors examine the representation of history as seen in the Wikidata records from a statistical point of view, in order to analyse the choices made by the editors in a critical way. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 5
j71riDLzjg
Whose History We Keep: Benchmarking Wikidata's Record of the Past's Protagonists
[ "Lennart Finke", "Sarah Sophie Pohl", "Janek Große", "Luca Lukacevic", "Stefan Haas" ]
Historical science compounds millennia of records of human activity, which have become available in digital, searchable form. Focusing on the largest such dataset—Wikidata—we attempt to find quantitative answers to the questions: Whose periods and which places' people do we know most about? What interests us about them? Who are we forgetting? What trends underlie the number of people registered and written about over time? We examine the individuals who have ended up in these datasets in relation to their readership, thereby shedding light not only on history itself but also on the practice of writing it. We begin with our finding that not only the number of people registered in Wikidata, but surprisingly also the fraction of the world population that is registered, follows an exponential increase over time. This is further amplified by accelerations in certain critical periods (such as around 600 BCE, 100 CE, 1500 CE, and 1740 CE), resulting in superexponential growth. In contrast, we analyze the precision of recorded birth dates and find a linear increase in the availability of birth dates precise to the decade and year. Here, we also observe a sudden increase in precision around 1500 CE, possibly due to the introduction of the printing press. Curiously, we find a statistically significant overrepresentation of certain birth months but no such effect for weekdays. The spatio-temporal analysis, based on an annotation by Laouenan et al., poignantly shows the shift of cultural centers over time, from the Middle East and China to Central Europe and later to North America. We point out that large sections of human history are staggeringly absent from the dataset; for instance, the Mughal Empire and premodern China after the Three Kingdoms Era have few representatives, despite constituting a significant portion of the world population of their time. We then turn our attention to the relationship between Wiki editors, readers, and the people they describe. First, we quantify a highly significant effect of the spoken language of associated Wikipedia entries and the country of origin of the person written about, which somewhat extends to geographically proximal countries as well. While the number of Wikidata entries per person over time follows a monotonic, quasi-exponential growth, the article reads per person over time are not monotonic at all, with, for instance, people from 500 BCE receiving more reads than those from 1400 CE, despite likely being fewer in number. A clear exponential growth in readership only starts around 1750 CE. Further, women and non-binary individuals, conditional on having an article about them, receive consistently more reads than men. Finally, we reflect on future developments of these metrics. Projecting our data forward, we expect that more than 1 in 1,000 people born today will receive a Wikidata entry (for comparison, more than 1 in 5,000 people have a Wikidata entry today). We hope these representatives will reflect what we care about, and we look forward to an era where societal-scale historical science can study everyone who wants to be studied.
[ "Digital History", "Methodology of History", "Open Data", "Wikidata" ]
https://openreview.net/pdf?id=j71riDLzjg
https://openreview.net/forum?id=j71riDLzjg
VlM9fTX0NX
official_review
1,735,748,261,348
j71riDLzjg
[ "everyone" ]
[ "~Franco_Bagnoli1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: An extremely interesting metric analysis of Wikidata representations in history review: The authors present a statistical analysis of data contained in Wikipedia about historical periods and locations. The summary provided indicates that this contribution can be extremely important both in guiding Wikipedia readers about the over- (and under-)representation of certain themes and in stimulating contributors to invest more effort in "neglected" topics. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 3
hjkVICSOMy
Behind the Edits: Exploring Human-Bot Collaboration in Wikipedia
[ "Max Wang", "Rosalia McLaughlin", "Jeanna Matthews" ]
Content in Wikipedia is added by both human volunteers and automated bots. Along the spectrum from purely human edits to purely automated edits, humans also use an array of automated tools that could be classified as light automation (e.g. spell checking) to heavier automation (e.g. content translation from one language to another) to generative AI tools with light human review. Unlike many other corpora of data, Wikipedia keeps detailed metadata about the source of edits (human or bot) and the tools used in the editing process. This offers a unique opportunity to explore how automation impacts content quality, breadth, and update frequency. Moreover, such categorization empowers researchers to make informed, data-driven decisions about the appropriate balance between bot and human involvement when using the resulting data for various purposes. Besides the high level classification of human or bot, Wikipedia tracks a wide variety of special tags that also shed light on the range of automation even for edits tagged as human edits. In this study, we categorize the special tags added in Wikipedia according to which tags indicate information about the level of automation-created content versus human-created content and which do not. We started off with a list of over 300 tags, each representing a unique type of edit or action logged in the metadata of a Wikipedia page. These tags ranged from general metadata indicators to more specific labels highlighting the use of tools, automated processes, or manual interventions. We first identified over 50 tags that we classify as relevant for placing edits to Wikipedia content on a spectrum from mostly human to mostly automated. Our categorization process was guided by several criteria, including whether a tag explicitly indicated the use of automation (e.g., tags associated with bots like "IABot" or "AWB") or manual edits requiring human oversight (e.g., "Manual revert" or "DiscussionTools"). Second, we classified tags on a scale of 1-5 based on the degree of automation implied by the tag. Special tags in Wikipedia are added voluntarily and in that sense tracking is not perfect, but they still represent some of the best attempts at tracking the process by which content is created. Unlike many corpora which do not invest at all in such detailed tracking of the creation process, Wikipedia's metadata allows researchers to include or exclude certain types of content based on how it was created and depending on the goals of the downstream task. For some tasks, automated or bot-created content could be perfectly useful, whereas for other tasks, including bot-generated content could degrade the quality of any models or articles produced. Our research aims to establish a clearer understanding of how content in Wikipedia is generated and then to assess its suitability for a variety of test cases.
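As a rough illustration of the metadata this study works with (not the authors' method), revision tags can be read from the public MediaWiki API as shown below. The tag-to-score mapping is a toy placeholder for the paper's own classification of 300+ tags, and the tag identifiers returned by the API can differ from the display names quoted in the abstract (for example, "Manual revert" is exposed as "mw-manual-revert").

```python
# Illustration only: read edit tags from the MediaWiki API and bucket revisions with
# a toy automation score (1 = mostly human, 5 = mostly automated). The mapping below
# is a placeholder; the paper's actual classification of 300+ tags is its own
# contribution and is not reproduced here.
import requests

API = "https://en.wikipedia.org/w/api.php"

TOY_AUTOMATION_SCORE = {
    "mw-manual-revert": 1,    # manual revert, human-driven
    "discussiontools": 1,     # human edit made with a light assistive tool
    "visualeditor": 2,        # human edit through the visual editor
    "contenttranslation": 4,  # heavier automation: machine-assisted translation
}

def revisions_with_tags(title, limit=50):
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "ids|user|tags",
        "rvlimit": limit,
        "format": "json",
        "formatversion": 2,
    }
    data = requests.get(API, params=params, timeout=30).json()
    return data["query"]["pages"][0].get("revisions", [])

def automation_scores(title):
    """Return one score per revision, or None when no informative tag is present."""
    scores = []
    for rev in revisions_with_tags(title):
        known = [TOY_AUTOMATION_SCORE[t] for t in rev.get("tags", []) if t in TOY_AUTOMATION_SCORE]
        scores.append(max(known) if known else None)
    return scores

if __name__ == "__main__":
    print(automation_scores("Wikidata"))
```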
[ "Wikipedia", "automation-assisted corpora generation", "tags", "metadata", "LLMs", "NLP", "AI", "edit tools" ]
https://openreview.net/pdf?id=hjkVICSOMy
https://openreview.net/forum?id=hjkVICSOMy
n3v5gitPUV
official_review
1,736,015,498,279
hjkVICSOMy
[ "everyone" ]
[ "~Luca_Martinelli1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: I approve review: Although I can only rely on the abstract of the proposal, the study looks like an interesting analysis of the quantity and quality of the edits made on Wikidata. In particular, it is interesting to assess the impact of automated edits made through the various automated or semi-automated editing systems authorised on Wikidata, as well as the impact of human review on the edits made. I find it fully consistent with the aim of the conference and certainly an enrichment of it. compliance: 5 scientific_quality: 5 originality: 5 impact: 4 confidence: 5
f0RnNTpqL0
Wikidata as a Backend for Research MediaWikis: A Case Study from the P-CITIZENS Project in documenting Amateur Theatre
[ "Ioanna Papazoglou", "Meike Wagner" ]
The ERC-funded project P-CITIZENS - Performing Citizenship explores the social and political roles of amateur theatre in Europe between 1780 and 1850. To support this research, the project has developed the Amateur Theatre Wiki, a platform dedicated to documenting historical and contemporary amateur theater groups. By adopting a WikiFAIR approach, the project implements an efficient, low-overhead solution that uses Wikidata as the structured data backend for the Wiki. The Wiki hosts textual and media content written about and by amateur theatre groups, while Wikidata functions as the repository for structured data, such as geographic locations, timelines, membership details, and affiliations. Leveraging Wikidata's interconnected nature, the project integrates this data into additional knowledge networks, enriching the broader cultural heritage landscape and enabling extensive data reuse. An innovative technical feature of the project is the automated rendering of Wikidata information into local MediaWiki pages via Infobox templates. This ensures seamless data presentation for end-users while maintaining a centralized external dataset. The separation of content (text and media hosted in the Wiki) from data (stored in Wikidata) enhances reusability, interoperability, and collaborative potential. The paper will explore the following themes: Strategies for Data Management: Techniques for scraping, importing, organizing, and curating amateur theatre groups and actors in Wikidata using tools like OpenRefine and BeautifulSoup. Copyright and Community Engagement: The process of determining the correct hosting location for media files, and notability criteria for Wikidata. Impact and Accessibility: How Wikidata integration enhances reusability, research opportunities, and public engagement with the dataset. This case study focuses on the potential synergy between Wikidata and digital humanities research, showcasing how open data platforms can support academic and cultural initiatives. It offers a replicable model for leveraging Wikidata to lower the complexities of hosting and maintaining a structured dataset, while promoting the dissemination of cultural knowledge.
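Purely as an external sketch of the Wikidata-as-backend data flow (the project itself renders Wikidata inside MediaWiki via Infobox templates), the snippet below pulls an item's statements from Wikidata's Special:EntityData endpoint and emits infobox-style wikitext. The template name, the item ID and the property-to-field mapping are illustrative assumptions, not the project's configuration.

```python
# External sketch of the Wikidata-as-backend flow, not the P-CITIZENS implementation
# (which renders Wikidata data inside MediaWiki via Infobox templates). The template
# name, item ID and property mapping below are illustrative assumptions.
import requests

def fetch_entity(qid):
    # Special:EntityData serves the full JSON representation of a Wikidata item.
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    data = requests.get(url, timeout=30).json()
    return next(iter(data["entities"].values()))

def first_value(entity, prop):
    for claim in entity.get("claims", {}).get(prop, []):
        value = claim["mainsnak"].get("datavalue", {}).get("value")
        if isinstance(value, dict):  # entity IDs, dates, quantities come as dicts
            value = value.get("id") or value.get("time") or str(value)
        return value
    return None

def infobox_wikitext(qid, field_map, template="Infobox amateur theatre group"):
    entity = fetch_entity(qid)
    label = entity.get("labels", {}).get("en", {}).get("value", qid)
    lines = ["{{" + template, f"| name = {label}"]
    for prop, field in field_map.items():
        value = first_value(entity, prop)
        if value is not None:
            lines.append(f"| {field} = {value}")
    lines.append("}}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Q42 (Douglas Adams) is only a well-known test item, not a theatre group;
    # P569 = date of birth, P19 = place of birth.
    print(infobox_wikitext("Q42", {"P569": "born", "P19": "birthplace"}))
```

Inside the wiki itself this step is handled by the Infobox templates the abstract mentions; the script above only mirrors the shape of the data flow.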
[ "Wikidata", "Amateur Theatre", "WikiFAIR", "MediaWiki", "Structured Data" ]
https://openreview.net/pdf?id=f0RnNTpqL0
https://openreview.net/forum?id=f0RnNTpqL0
yP9Ragw9JZ
official_review
1,736,524,007,949
f0RnNTpqL0
[ "everyone" ]
[ "~Silvia_Bruni1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Wikidata for Documenting Historical Amateur Theatre review: The project is interesting because it focuses on the history of a broad constellation of institutions and individuals connected to amateur theatre, which has been documented in a fragmented and not easily accessible manner. The connection with a European project (ERC-funded project P-CITIZENS - Performing Citizenship explores the social and political roles of amateur theatre in Europe between 1780 and 1850) demonstrates the versatility of Wikidata and its potential uses. The infobox, designed to make the information more easily readable, will enhance the usability of the data. The project lends itself to open science activities and could serve as a replicable model. The Amateur Theatre Wiki platform is still empty at the time of review. Examples can be viewed. compliance: 4 scientific_quality: 4 originality: 5 impact: 4 confidence: 4
f0RnNTpqL0
Wikidata as a Backend for Research MediaWikis: A Case Study from the P-CITIZENS Project in documenting Amateur Theatre
[ "Ioanna Papazoglou", "Meike Wagner" ]
The ERC-funded project P-CITIZENS - Performing Citizenship explores the social and political roles of amateur theatre in Europe between 1780 and 1850. To support this research, the project has developed the Amateur Theatre Wiki, a platform dedicated to documenting historical and contemporary amateur theatre groups. By adopting a WikiFAIR approach, the project implements an efficient, low-overhead solution that uses Wikidata as the structured data backend for the Wiki. The Wiki hosts textual and media content written about and by amateur theatre groups, while Wikidata functions as the repository for structured data, such as geographic locations, timelines, membership details, and affiliations. Leveraging Wikidata’s interconnected nature, the project integrates this data into additional knowledge networks, enriching the broader cultural heritage landscape and enabling extensive data reuse. An innovative technical feature of the project is the automated rendering of Wikidata information into local MediaWiki pages via Infobox templates. This ensures seamless data presentation for end-users while maintaining a centralized external dataset. The separation of content (text and media hosted in the Wiki) from data (stored in Wikidata) enhances reusability, interoperability, and collaborative potential. The paper will explore the following themes: Strategies for Data Management: Techniques for scraping, importing, organizing, and curating amateur theatre groups and actors in Wikidata using tools like OpenRefine and BeautifulSoup. Copyright and Community Engagement: The process of determining the correct hosting location for media files, and notability criteria for Wikidata. Impact and Accessibility: How Wikidata integration enhances reusability, research opportunities, and public engagement with the dataset. This case study focuses on the potential synergy between Wikidata and digital humanities research, showcasing how open data platforms can support academic and cultural initiatives. It offers a replicable model for leveraging Wikidata to lower the complexities of hosting and maintaining a structured dataset, while promoting the dissemination of cultural knowledge.
[ "Wikidata", "Amateur Theatre", "WikiFAIR", "MediaWiki", "Structured Data" ]
https://openreview.net/pdf?id=f0RnNTpqL0
https://openreview.net/forum?id=f0RnNTpqL0
BJadoGJF8I
official_review
1,736,594,642,040
f0RnNTpqL0
[ "everyone" ]
[ "~Rossana_Morriello1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: An extremely important avenue for Wikidata and research review: The project is very interesting and useful for studies on theatre and performing arts. Its relevance is confirmed by being an ERC project. The use of Wikidata to support national and international research projects, as in this case, is an extremely important avenue for Wikidata, in line with the theme of collaboration with the academic research community proposed by the conference. compliance: 5 scientific_quality: 4 originality: 5 impact: 5 confidence: 3
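The P-CITIZENS abstract above mentions scraping and curating amateur theatre groups with tools like BeautifulSoup and OpenRefine. As a purely illustrative sketch of that kind of workflow (not the project's actual code), the following Python snippet scrapes group names from a hypothetical listing page and writes them to a CSV that could then be reconciled against Wikidata in OpenRefine. The URL and CSS selectors are invented placeholders.

# Illustrative sketch only: the URL and selectors are hypothetical placeholders.
import csv
import requests
from bs4 import BeautifulSoup

LISTING_URL = "https://example.org/amateur-theatre-groups"  # placeholder source page

response = requests.get(LISTING_URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

rows = []
# Assume each group is listed as <li class="group"> with <span class="name"> and <span class="place">.
for item in soup.select("li.group"):
    name = item.select_one("span.name")
    place = item.select_one("span.place")
    rows.append({
        "name": name.get_text(strip=True) if name else "",
        "place": place.get_text(strip=True) if place else "",
    })

# Write a CSV that OpenRefine can load for reconciliation against Wikidata.
with open("amateur_theatre_groups.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "place"])
    writer.writeheader()
    writer.writerows(rows)

In OpenRefine, the resulting "name" column could then be reconciled against the Wikidata reconciliation service and reviewed before any new items are created.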
dgGa5wjATX
Middle Aramaic epigraphy in Wikidata. Case of inscriptional material from Dura-Europos.
[ "Aleksandra Kubiak-Schneider" ]
As a consultant for the inscriptional material in various dialects of Aramaic for the IDEA (International Digital Dura-Europos Archive) project at Bard College and Yale University, I realised how underrepresented Aramaic epigraphy is in digital databases. There are many databases which gather Ancient Greek (e.g. packhum, Trismegistos) and Latin (e.g. EDH) material, but an easily accessible online database focused solely on the Aramaic evidence is missing. The work of the IDEA project focuses, among other things, on collecting the entire epigraphic evidence in Linked Open Data datasets and creating the corresponding Wikidata entries. Wikidata is an excellent tool for providing all the editions of the inscriptions, with their different readings and translations, to present the complexity of the field. My paper highlights the role of Wikidata in making this innovative inscriptional corpus accessible to the broad community. Furthermore, it reflects on new perspectives for the study of Aramaic inscriptions from the period between 300 BCE and 300 CE.
[ "epigraphy", "digital humanities", "databases", "ancient languages", "linked open data" ]
https://openreview.net/pdf?id=dgGa5wjATX
https://openreview.net/forum?id=dgGa5wjATX
eqkOCdI7QV
official_review
1,736,493,419,376
dgGa5wjATX
[ "everyone" ]
[ "~Monica_Berti1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A project on Aramaic inscriptions still underrepresented in Wikidata review: The author aims to use Wikidata to add entries to an under-represented corpus consisting of Aramaic inscriptions. The project is innovative and interesting because it can contribute to adding new data about a historical language (Aramaic) and a field (epigraphy) that are not yet adequately represented in Wikidata. compliance: 5 scientific_quality: 5 originality: 5 impact: 4 confidence: 4
dgGa5wjATX
Middle Aramaic epigraphy in Wikidata. Case of inscriptional material from Dura-Europos.
[ "Aleksandra Kubiak-Schneider" ]
As a consultant for the inscriptional material in various dialects of Aramaic for the IDEA (International Digital Dura-Europos Archive) project at Bard College and Yale University, I realised how underrepresented Aramaic epigraphy is in digital databases. There are many databases which gather Ancient Greek (e.g. packhum, Trismegistos) and Latin (e.g. EDH) material, but an easily accessible online database focused solely on the Aramaic evidence is missing. The work of the IDEA project focuses, among other things, on collecting the entire epigraphic evidence in Linked Open Data datasets and creating the corresponding Wikidata entries. Wikidata is an excellent tool for providing all the editions of the inscriptions, with their different readings and translations, to present the complexity of the field. My paper highlights the role of Wikidata in making this innovative inscriptional corpus accessible to the broad community. Furthermore, it reflects on new perspectives for the study of Aramaic inscriptions from the period between 300 BCE and 300 CE.
[ "epigraphy", "digital humanities", "databases", "ancient languages", "linked open data" ]
https://openreview.net/pdf?id=dgGa5wjATX
https://openreview.net/forum?id=dgGa5wjATX
NIpPvz4Aih
official_review
1,735,932,573,627
dgGa5wjATX
[ "everyone" ]
[ "~Camillo_Carlo_Pellizzari_di_San_Girolamo1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: An interesting project about an underrepresented field of study review: The author highlights a relevant problem in the field of Aramaic studies, the absence of a comprehensive online database of inscriptions. The idea of using Wikidata to make this corpus more accessible is very innovative, also considering that in Wikidata the presence of both epigraphic corpora and Aramaic heritage is still sparse. compliance: 5 scientific_quality: 5 originality: 5 impact: 4 confidence: 4
bZCXeDzPU6
A Wikibooks-Wikidata Integration for Crowdsourced Curatorship at the Museu Paulista in Brazil
[ "João Peschanski", "Solange Ferraz de Lima" ]
The proposal is to describe and analyse an initiative for the digital dissemination of the Museu Paulista collection at USP, based on the reuse of data available on Wikidata for the creation of a Wikibook in the Portuguese version of the Open Book platform, where works are written collaboratively and published under a free licence. The Museu Paulista was founded in 1894 and is the oldest public museum in the state of São Paulo. In 1963, the museum was integrated into the structure of the University of São Paulo. Between 2013 and 2022, the historic building that houses the museum was closed to visitors because its facilities had to be renovated. During this time, the museum team expanded its digital activities. The Paulista Museum had already had a digital catalogue of its collections since 1993 and in 2017 entered into a partnership with WikiMovimento Brasil, which led to the creation of the USP Paulista Museum GLAM page, where more than 30,000 objects from the museum's collection are now available. Based on the database made available on Wikidata and with the support of the Banco do Brasil Foundation (2020-2022), the museum's digital strategies have developed activities to reuse the data, such as marathons and competitions to edit and create entries for Wikipedia and the production of three wiki books - Marcas nas fotografias de Werner Haberkorn, As fotografias de Guilherme Gaensly no acervo do Museu Paulista and Audiodescrição de obras do Museu do Ipiranga. The Wikibook we have selected for this analysis is Guilherme Gaensly Photographs in Museu Paulista Collection - https://pt.wikibooks.org/wiki/As_fotografias_de_Guilherme_Gaensly_no_acervo_do_Museu_Paulista. The wikibook The photographs of Guilherme Gaensly in the collection of the Museu Paulista was launched in the live edition marathon “São Paulo Photographic - Museu Paulista da USP” on YouTube on 8 May 2020. The Wikibook features a set of 140 photographs and postcards by Gaensly, especially urban landscapes of São Paulo and portraits taken in the studio, and encourages readers to collaborate in the description using digital tools. In the technical infrastructure of the wikibook about Gaensly, each page was created by just one person, but the information on it comes from edits made either directly on Wikidata or in applications that indirectly generate edits on Wikidata. The page created initially contains an image of Gaensly and its title, as well as predefined text and data sheets that either have gaps to be filled in according to the specifics of each image or are generated as information is entered into Wikidata. The page also contains a section on participatory curatorship, a section on connections with other images and two sections in the footer with contextual references to Gaensly's work and the Paulista Museum's GLAM-Wiki. Keywords: Wikibooks; GLAM; Digital dissemination; Educational dissemination
[ "Wikibooks; GLAM; Digital dissemination; Educational dissemination" ]
https://openreview.net/pdf?id=bZCXeDzPU6
https://openreview.net/forum?id=bZCXeDzPU6
rjEHME3xgK
official_review
1,736,694,952,204
bZCXeDzPU6
[ "everyone" ]
[ "~Iolanda_Pensa1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Inviting review: The case study the authors propose focuses on how contributors can relate to and visualise the content they add to Wikidata. The use of a WikiBook seems a relevant method to allow people to rely on something concrete (very similar to the physical catalogue of a collection). Each page of the book invites people to improve content with links to Wikidata tools: "Participate in the curatorship of this image: Add the amount of each descriptor of this work to the Ipiranga Museum Wiki: How many have? Add descriptors to the item of this work on Wikidata using Wikidata Art Depiction Explorer Add the coordinates of the descriptors to the image by Wikidata Image Positions Add descriptors to this image on Wikimedia Commons through the ISA Add other metadata to the item of this work by Tabernacle". I think it is a cool method to encourage people to contribute. The abstract states that "each page was created by just one person, but the information on it comes from edits made either directly on Wikidata or in applications that indirectly generate edits on Wikidata". It would be very interesting to know more about the people who contributed to this publication and how the method worked, whether it was simple for people to participate and what their feedback was. The collaboration between the Museu Paulista / Museu do Ipiranga and the Wikimedia projects is significant and important. The collaboration is at different levels and with a series of different initiatives: uploads, GLAM pages, content on Wikidata. The abstract is a little confusing because the GLAM collaboration seems to be the key element of the presentation, but the method, the use of a Wikibook to invite people to contribute to Wikidata and improve data related to a GLAM, is much more relevant for our conference. [It took me 5 minutes to understand what USP (University of São Paulo) is. Maybe next time introduce the acronym at the beginning ;-)] compliance: 5 scientific_quality: 3 originality: 4 impact: 4 confidence: 4
aRDufrGP3m
Enhancement and sustainable fruition of Cultural Heritage in the era of ecological transition: open data and citizen science
[ "Alessia Minnella" ]
The doctoral research project is developed in collaboration with Wikimedia Italia and focuses on Cultural Heritage in the Italian context. The project investigates the correlation between Cultural Heritage preservation and the ecological transition, emphasizing open knowledge, sustainability, and citizen science. Climate change poses significant risks to Cultural Heritage, including accelerated degradation of materials in both indoor and outdoor environments. The research adopts an interdisciplinary approach to analyze these phenomena through the development of three case studies representing different display environments and contexts: a Green museum, a museum storage room, and an outdoor heritage site. Particularly, these studies focus on monitoring environmental parameters (both atmospheric and pollution ones), assessing material conservation, and developing sustainable strategies for heritage management. Wikimedia communities play a pivotal role in ensuring the accessibility and dissemination of research findings through Open Access, Open Data, and free licensing practices. By integrating citizen science into the project, the initiative empowers communities to actively participate in data collection and analysis, fostering a broader public understanding of the impact of climate change on cultural heritage. An in-depth analysis of the museums and the cultural institutions on Wikidata highlights the centrality of this platform as a tool for collecting, organising and accessing cultural data. The project overview emphasises how museums, with their already available data and those in the process of being opened, can be mapped, enriched and interconnected through Wikidata. This approach enables the selection of new relevant data and the updating of an open and accessible information network, fostering the valorisation of cultural heritage. Indeed, a key component of the project deals with organizing Wikimedia events in order to involve citizens in the research process. These events aim to raise awareness, share best practices in sustainable heritage conservation, and democratize access to scientific knowledge. The activities that are going to be planned are known as “wikiexcursions” and “editathons”, and they are going to be focused on updating Wikidata and Wikimedia Commons through specific themes focused on cultural heritage and preventive conservation. Within the editathons, it is also possible to plan activities aimed at updating specific points or interesting areas of OpenStreetMap, through the correct geolocation of the outdoor cultural heritage goods under study. By combining cultural heritage preservation, open knowledge dissemination, and community involvement, the project highlights the transformative potential of Wikimedia-driven initiatives in addressing global challenges like climate change while reinforcing the relevance of cultural heritage in society’s ecological transition. In addition, as a result of the monitoring of all the environmental data acquired and the state of conservation of the works, it is possible to obtain the definition of guidelines and best-practices aimed at the preventive conservation of the works of art under study.
[ "Wikidata", "OpenStreetMap", "Cultural Heritage", "Conservation", "Environmental sustainability", "Open Data", "Citizen Science" ]
https://openreview.net/pdf?id=aRDufrGP3m
https://openreview.net/forum?id=aRDufrGP3m
t0ahZ8nj40
official_review
1,735,749,207,840
aRDufrGP3m
[ "everyone" ]
[ "~Franco_Bagnoli1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: An interesting collective initiative about Cultural Heritage preservation and climate change review: The author presents her PhD research programme on Cultural Heritage preservation and the challenges presented by climate change. This proposal includes citizen participation and the exploitation of Wikimedia data. I think that this lightning presentation can raise interest in participation, provide critical contributions and promote the diffusion of the planned initiatives. compliance: 5 scientific_quality: 4 originality: 5 impact: 5 confidence: 3
YzLzBkneKb
The Journal of Open Humanities Data. Bridging open data and Wikidata for the Humanities
[ "Andrea Farina", "Barbara McGillivray" ]
Wikidata serves as a critical tool for enriching and interconnecting datasets, enabling researchers to explore relationships across diverse domains (Farda-Sarbas and Müller-Birn 2019; Neubert 2017). It offers a centralized platform for integrating identifiers, metadata, and semantic links, and allows for the creation of interoperable and reusable datasets supporting advanced analysis and interdisciplinary research. At the Journal of Open Humanities Data (JOHD), we embrace the potential of Wikidata to amplify the impact of open data for research in the humanities. Through the publication of data papers (peer-reviewed articles describing datasets, their methodologies, and their reuse potential), JOHD ensures that humanities datasets are accessible and reusable. Our mission aligns with the principles of platforms like Wikidata, emphasizing transparency, accessibility, and collaboration to elevate the role of data in advancing scholarly work and public engagement (Wigdorowitz et al. 2024). This poster highlights the synergies between JOHD and Wikidata, focusing on how the journal’s principles of open access, reusability, and reproducibility complement Wikidata’s capabilities as a linked open data hub. This collaboration can enhance the value and impact of humanities research in the digital age with JOHD acting as a bridge to encourage humanities scholars to engage with Wikidata by providing guidance on integrating datasets into Wikidata. We present case studies of data papers published in JOHD, showing how they have used Wikidata for dataset creation. For instance, linking place names in historical newspapers to Wikidata (Coll Ardanuy et al. 2022) enhances cultural heritage accessibility. Multilingual cultural heritage information, such as historical Chinese kung fu masters, can be integrated with Wikidata into reusable and human-centered knowledge graphs (Hou and Yuan 2023). Further, Wikidata ensures the reusability and transparency of bibliographical data, supporting JOHD’s emphasis on reproducible research (Malínek et al. 2024). We also comment on published datasets that do not mention Wikidata but could potentially benefit from its integration to enhance interdisciplinarity (e.g., Farina 2023). Finally, we explore how datasets published in JOHD and integrated with Wikidata can enhance the visibility and discoverability of research (cf. McGillivray et al. 2022) by tracking dataset reuse and citation within the Wikidata ecosystem. References Coll Ardanuy, M., Beavan, D., Beelen, K., Hosseini, K., Lawrence, J., McDonough, K., Nanni, F., van Strien, D., & Wilson, D. C. S. (2022). A Dataset for Toponym Resolution in Nineteenth- Century English Newspapers. Journal of Open Humanities Data, 8(1), 3, pp. 1–7. DOI: https://doi.org/10.5334/johd.56 Farda-Sarbas, M., & Müller-Birn, C. (2019). Wikidata from a Research Perspective - A Systematic Mapping Study of Wikidata. ArXiv, abs/1908.11153. https://doi.org/10.48550/arXiv.1908.11153 Farina, A. (2023). Lost at Sea: A Dataset of 25+ SEA Words Morpho-Semantically Annotated in Ancient Greek and Latin. Journal of Open Humanities Data, 9: 24, pp. 1–7. DOI: https://doi.org/10.5334/johd.139 Hou, Y., & Yuan, L. (2023). Building a Knowledge Graph of Chinese Kung Fu Masters From Heterogeneous Bilingual Data. Journal of Open Humanities Data, 9: 27, pp. 1–12. 
DOI: https://doi.org/10.5334/johd.136 Malínek, V., Umerle, T., Gray, E., Heibi, I., Király, P., Klaes, C., Korytkowski, P., Lindemann, D., Moretti, A., Panušková, Ch., Péter, R., Tolonen, M., Tomczyńska, A., & Vimr, O. (2024). Open Bibliographical Data Workflows and the Multilinguality Challenge. Journal of Open Humanities Data, 10: 27, pp. 1–14. DOI: https://doi.org/10.5334/johd.190 McGillivray, B., Marongiu, P., Pedrazzini, N., Ribary, M., Wigdorowitz, M., & Zordan, E. (2022). Deep Impact: A Study on the Impact of Data Papers and Datasets in the Humanities and Social Sciences. Publications, 10(4), 39. https://doi.org/10.3390/publications10040039 Neubert, J. (2017). Wikidata as a Linking Hub for Knowledge Organization Systems? Integrating an Authority Mapping into Wikidata and Learning Lessons for KOS Mappings. NKOS@TPDL, 1–12. Wigdorowitz, M., Ribary, M., Farina, A., Lima, E., Borkowski, D., Marongiu, P., Sorensen, A. H., Timis, C., & McGillivray, B. (2024). It Takes a Village! Editorship, Advocacy, and Research in Running an Open Access Data Journal. Publications, 12(3), 24. https://doi.org/10.3390/publications12030024
[ "open data", "open access", "open humanities research", "data papers", "Journal of Open Humanities Data" ]
https://openreview.net/pdf?id=YzLzBkneKb
https://openreview.net/forum?id=YzLzBkneKb
lVZjaYie7I
official_review
1,735,832,239,028
YzLzBkneKb
[ "everyone" ]
[ "~Rossana_Morriello1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Perfectly focused on the theme and practical application review: The use of Wikidata in scholarly journals is one of the most interesting fields of development of the collaboration between Wikidata and the academic community - and particularly in the humanities - which is the main topic of the conference. The aim of advancing public engagement, as stated in the proposal, is also extremely interesting and not very much considered in the use of data and datasets, so it is quite an original approach, which I would suggest highlighting. The poster seems to be perfectly focused on the theme of the conference and presents a practical application of the use of Wikidata which could be a useful benchmarking opportunity for scholars and journal managers. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 5
YzLzBkneKb
The Journal of Open Humanities Data. Bridging open data and Wikidata for the Humanities
[ "Andrea Farina", "Barbara McGillivray" ]
Wikidata serves as a critical tool for enriching and interconnecting datasets, enabling researchers to explore relationships across diverse domains (Farda-Sarbas and Müller-Birn 2019; Neubert 2017). It offers a centralized platform for integrating identifiers, metadata, and semantic links, and allows for the creation of interoperable and reusable datasets supporting advanced analysis and interdisciplinary research. At the Journal of Open Humanities Data (JOHD), we embrace the potential of Wikidata to amplify the impact of open data for research in the humanities. Through the publication of data papers (peer-reviewed articles describing datasets, their methodologies, and their reuse potential), JOHD ensures that humanities datasets are accessible and reusable. Our mission aligns with the principles of platforms like Wikidata, emphasizing transparency, accessibility, and collaboration to elevate the role of data in advancing scholarly work and public engagement (Wigdorowitz et al. 2024). This poster highlights the synergies between JOHD and Wikidata, focusing on how the journal’s principles of open access, reusability, and reproducibility complement Wikidata’s capabilities as a linked open data hub. This collaboration can enhance the value and impact of humanities research in the digital age with JOHD acting as a bridge to encourage humanities scholars to engage with Wikidata by providing guidance on integrating datasets into Wikidata. We present case studies of data papers published in JOHD, showing how they have used Wikidata for dataset creation. For instance, linking place names in historical newspapers to Wikidata (Coll Ardanuy et al. 2022) enhances cultural heritage accessibility. Multilingual cultural heritage information, such as historical Chinese kung fu masters, can be integrated with Wikidata into reusable and human-centered knowledge graphs (Hou and Yuan 2023). Further, Wikidata ensures the reusability and transparency of bibliographical data, supporting JOHD’s emphasis on reproducible research (Malínek et al. 2024). We also comment on published datasets that do not mention Wikidata but could potentially benefit from its integration to enhance interdisciplinarity (e.g., Farina 2023). Finally, we explore how datasets published in JOHD and integrated with Wikidata can enhance the visibility and discoverability of research (cf. McGillivray et al. 2022) by tracking dataset reuse and citation within the Wikidata ecosystem. References Coll Ardanuy, M., Beavan, D., Beelen, K., Hosseini, K., Lawrence, J., McDonough, K., Nanni, F., van Strien, D., & Wilson, D. C. S. (2022). A Dataset for Toponym Resolution in Nineteenth- Century English Newspapers. Journal of Open Humanities Data, 8(1), 3, pp. 1–7. DOI: https://doi.org/10.5334/johd.56 Farda-Sarbas, M., & Müller-Birn, C. (2019). Wikidata from a Research Perspective - A Systematic Mapping Study of Wikidata. ArXiv, abs/1908.11153. https://doi.org/10.48550/arXiv.1908.11153 Farina, A. (2023). Lost at Sea: A Dataset of 25+ SEA Words Morpho-Semantically Annotated in Ancient Greek and Latin. Journal of Open Humanities Data, 9: 24, pp. 1–7. DOI: https://doi.org/10.5334/johd.139 Hou, Y., & Yuan, L. (2023). Building a Knowledge Graph of Chinese Kung Fu Masters From Heterogeneous Bilingual Data. Journal of Open Humanities Data, 9: 27, pp. 1–12. 
DOI: https://doi.org/10.5334/johd.136 Malínek, V., Umerle, T., Gray, E., Heibi, I., Király, P., Klaes, C., Korytkowski, P., Lindemann, D., Moretti, A., Panušková, Ch., Péter, R., Tolonen, M., Tomczyńska, A., & Vimr, O. (2024). Open Bibliographical Data Workflows and the Multilinguality Challenge. Journal of Open Humanities Data, 10: 27, pp. 1–14. DOI: https://doi.org/10.5334/johd.190 McGillivray, B., Marongiu, P., Pedrazzini, N., Ribary, M., Wigdorowitz, M., & Zordan, E. (2022). Deep Impact: A Study on the Impact of Data Papers and Datasets in the Humanities and Social Sciences. Publications, 10(4), 39. https://doi.org/10.3390/publications10040039 Neubert, J. (2017). Wikidata as a Linking Hub for Knowledge Organization Systems? Integrating an Authority Mapping into Wikidata and Learning Lessons for KOS Mappings. NKOS@TPDL, 1–12. Wigdorowitz, M., Ribary, M., Farina, A., Lima, E., Borkowski, D., Marongiu, P., Sorensen, A. H., Timis, C., & McGillivray, B. (2024). It Takes a Village! Editorship, Advocacy, and Research in Running an Open Access Data Journal. Publications, 12(3), 24. https://doi.org/10.3390/publications12030024
[ "open data", "open access", "open humanities research", "data papers", "Journal of Open Humanities Data" ]
https://openreview.net/pdf?id=YzLzBkneKb
https://openreview.net/forum?id=YzLzBkneKb
1gWffjKMKM
official_review
1,736,249,166,899
YzLzBkneKb
[ "everyone" ]
[ "~Alessandra_Boccone1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A poster on several key themes of the conference review: The poster highlights how the policies and contents of the journal JOHD are aligned with the possibilities offered by a tool such as Wikidata, which makes it possible to discover relationships across different domains, with specific attention to the open access world of scholarly journals in the humanities. The presentation touches on several key themes of the conference, such as data management methods, research strategies and the sharing of datasets, and is therefore perfectly in line with what was requested in the call; the accompanying bibliography is also of high quality. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 5
XyOzBSxykA
Collecting and Detecting Ancient Greek Historians through Wikibase and Wikidata
[ "Leonardo D'Addario" ]
1. Polybius and the lost ancient Greek historians In this paper, I will use Wikibase and Wikidata as part of my PhD project on Ancient Greek History and Literature at Leipzig University. Most works of ancient Greek historians are lost and survive only through quotations by later sources. Classical scholars usually call these quotations “fragments.” My PhD project focuses on The Histories of Polybius (206–124 BCE ca.), as this work contains various references to earlier historiographers. It specifically explores the language that Polybius uses when citing other historians and aims to provide insight into their reuse and reception within The Histories. The ultimate goal is to clarify whether Polybius engaged with a specific canon of ancient historians during the composition of his work. This research is part of the MECANO project (https://mecano-dn.eu/), which investigates the dynamics of canonization of Greco-Roman texts by combining traditional approaches with new digital methods. 2. First step: Collecting structured data To achieve these goals, I am developing a Wikibase that systematically collects structured data on the quotations of the lost historiographers cited by Polybius. The database will include the original Greek text of the quotations, metadata about the quoted authors (e.g., name, provenance, period) and their works (e.g., title, number of books, content), as well as references to the relevant sections of The Histories (book, chapter, paragraph) and to the classification of the quotations in Jacoby’s Die Fragmente der Griechischen Historiker, the authoritative collection of fragments of the lost historiographers. Since the PhD project focuses on the citing language, every Wikibase instance will also highlight relevant linguistic elements (e.g., verbs of saying and writing, forms of the author’s name, variations in the title of works). 3. Second step: Detecting structured data Then, I will employ the Wikidata Query Service to detect the structured data. Specifically, I aim to create default queries that may prove interesting both for my PhD project and potential Wikibase users. Indeed, queries such as “quotations where Polybius uses the verb ἱστορέω” or “quotations where Polybius specifies the title of works” can help analysing the language and, thus, the citing practice (or ratio laudandi, as classical scholars usually call it) of Polybius. 4. Objectives The main objective is to create new datasets according to the principle of Linked Open Data and to make them available and reusable for research communities across different disciplines. Nowadays a key issue, especially in Digital Classics, is the lack of coherently structured data and metadata about ancient authors and texts. The Wikibase I am developing could therefore contribute not only to the Wikimedia community by integrating new open data, but also to Classical scholarship. Ultimately, I aim to show that new technologies are proving helpful in advancing even traditional approaches to the Humanities.
[ "digital classics", "ancient greek historiography", "polybius", "wikibase", "wikidata", "dataset", "linked open data", "reuse of data", "wikidata query service" ]
https://openreview.net/pdf?id=XyOzBSxykA
https://openreview.net/forum?id=XyOzBSxykA
v6EiW4puKr
official_review
1,736,243,870,713
XyOzBSxykA
[ "everyone" ]
[ "~Lucia_Sardo1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Review review: The proposal is well structured, with a clear presentation of the methodology used, the steps of the project and the possible results, especially in terms of impact on the relevant scientific community. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 4
XyOzBSxykA
Collecting and Detecting Ancient Greek Historians through Wikibase and Wikidata
[ "Leonardo D'Addario" ]
1. Polybius and the lost ancient Greek historians In this paper, I will use Wikibase and Wikidata as part of my PhD project on Ancient Greek History and Literature at Leipzig University. Most works of ancient Greek historians are lost and survive only through quotations by later sources. Classical scholars usually call these quotations “fragments.” My PhD project focuses on The Histories of Polybius (206–124 BCE ca.), as this work contains various references to earlier historiographers. It specifically explores the language that Polybius uses when citing other historians and aims to provide insight into their reuse and reception within The Histories. The ultimate goal is to clarify whether Polybius engaged with a specific canon of ancient historians during the composition of his work. This research is part of the MECANO project (https://mecano-dn.eu/), which investigates the dynamics of canonization of Greco-Roman texts by combining traditional approaches with new digital methods. 2. First step: Collecting structured data To achieve these goals, I am developing a Wikibase that systematically collects structured data on the quotations of the lost historiographers cited by Polybius. The database will include the original Greek text of the quotations, metadata about the quoted authors (e.g., name, provenance, period) and their works (e.g., title, number of books, content), as well as references to the relevant sections of The Histories (book, chapter, paragraph) and to the classification of the quotations in Jacoby’s Die Fragmente der Griechischen Historiker, the authoritative collection of fragments of the lost historiographers. Since the PhD project focuses on the citing language, every Wikibase instance will also highlight relevant linguistic elements (e.g., verbs of saying and writing, forms of the author’s name, variations in the title of works). 3. Second step: Detecting structured data Then, I will employ the Wikidata Query Service to detect the structured data. Specifically, I aim to create default queries that may prove interesting both for my PhD project and potential Wikibase users. Indeed, queries such as “quotations where Polybius uses the verb ἱστορέω” or “quotations where Polybius specifies the title of works” can help analysing the language and, thus, the citing practice (or ratio laudandi, as classical scholars usually call it) of Polybius. 4. Objectives The main objective is to create new datasets according to the principle of Linked Open Data and to make them available and reusable for research communities across different disciplines. Nowadays a key issue, especially in Digital Classics, is the lack of coherently structured data and metadata about ancient authors and texts. The Wikibase I am developing could therefore contribute not only to the Wikimedia community by integrating new open data, but also to Classical scholarship. Ultimately, I aim to show that new technologies are proving helpful in advancing even traditional approaches to the Humanities.
[ "digital classics", "ancient greek historiography", "polybius", "wikibase", "wikidata", "dataset", "linked open data", "reuse of data", "wikidata query service" ]
https://openreview.net/pdf?id=XyOzBSxykA
https://openreview.net/forum?id=XyOzBSxykA
szpetriOmV
official_review
1,736,432,057,486
XyOzBSxykA
[ "everyone" ]
[ "~Carlo_Bianchini1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: LOD datasets for Polybius and lost ancient Greek historians review: The project aims to fill the present lack of coherently structured data and metadata about ancient authors and their texts through both the creation and collection of structured data and the discovery of relevant patterns within them. As the submission is focused on the presentation of an upcoming project, I would suggest moving the proposal to a different section rather than the paper section. compliance: 5 scientific_quality: 4 originality: 5 impact: 4 confidence: 4
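The abstract above describes building default queries on a project Wikibase, such as "quotations where Polybius uses the verb ἱστορέω". As a hedged illustration of what running such a query from Python might look like, the sketch below sends SPARQL to a query-service endpoint; the endpoint URL and the property P100 used for the citing verb are hypothetical placeholders, since the project's actual data model is not published here.

# Hypothetical sketch: the endpoint and property ID are placeholders, not the project's real schema.
import requests

ENDPOINT = "https://example.wikibase.cloud/query/sparql"  # placeholder query service URL
QUERY = """
SELECT ?quotation ?quotationLabel WHERE {
  ?quotation wdt:P100 "ἱστορέω" .   # P100 = hypothetical 'citing verb' property
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,el". }
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "polybius-fragments-demo/0.1"},
    timeout=60,
)
response.raise_for_status()

# Print each matching quotation URI with its label from the SPARQL JSON results.
for binding in response.json()["results"]["bindings"]:
    print(binding["quotation"]["value"], "-", binding["quotationLabel"]["value"])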
VbMXuhIySF
Searching for Unicorns: Finding Metadata for Scientific Articles
[ "Federica Viazzi" ]
[ITA] La lightning talk espone un progetto in fase di strutturazione e sviluppo nato dalla necessità di catalogare la produzione scientifica di un'azienda ospedaliera senza avere a disposizione un software bibliografico o accesso a IRIS. Si stanno mappando i campi del software in uso (pensato per gli studi clinici e adattato ai dati bibliografici), i campi Dublin Core di un IRIS e le entità di Wikidata, per strutturare uno strumento libero, gratuito, interoperabile e cooperativo per registrare non solo i metadati bibliografici ma anche tutte le altre informazioni amministrative, gestionali e bibliometriche relative alle pubblicazioni scientifiche. Oltre ai metadati bibliografici indispensabili per identificare univocamente un articolo scientifico sono infatti indispensabili dati accessori quali l’ambito disciplinare (informazione non soggetta a normalizzazione da un thesauro ma derivata dall’organizzazione della ricerca scientifica dell’ente) della pubblicazione, la struttura o le strutture d’appartenenza degli autori (informazione per la quale è necessario poter tenere traccia delle modifiche dell’organizzazione dell’ente e gli spostamenti del ricercatore). È necessario inoltre rendere questi dati interoperabili con quelli relativi alla rivista dove l’articolo è pubblicato (es: DOAJ Seal) e dei ricercatori (es: ORCID). L’obiettivo è quindi quello di avere uno strumento che soddisfi le esigenze di registrazione, monitoraggio e analisi della produzione scientifica, anche per poterla poi valorizzare e disseminare in ottica di Open Science. [EN] The lightning talk presents an ongoing project aimed at cataloging a hospital's scientific output without owning a bibliographic software or having access to IRIS. Our approach involves mapping the fields of the software currently in use (originally designed for clinical trials but adapted to bibliographic data), the Dublin Core fields of an IRIS, and relevant Wikidata entities. The aim is to develop a free, interoperable and cooperative tool for recording bibliographic metadata along with administrative data, management data and bibliometric information related to scientific publications. In addition, to the essential bibliographic metadata required to uniquely identify a scientific article, supplementary data are equally necessary. These include the disciplinary scope, which refers to information derived from the institution's research organization rather than being standardized through a thesaurus. Additionally, author affiliations that include details about the organizational units to which the authors belong, with the ability to track changes in the institution's structure and the movement of researchers over time. It is also crucial to ensure interoperability with data related to the journal in which the article is published (e.g., DOAJ Seal) and researcher information (e.g., ORCID). The goal is to develop a tool that meets the needs of recording, monitoring, and analyzing scientific output, enabling its enhancement and dissemination within an Open Science framework. [*] The presentation will be held in Italian
[ "Repository", "scientific production", "authority control", "bibliometry" ]
https://openreview.net/pdf?id=VbMXuhIySF
https://openreview.net/forum?id=VbMXuhIySF
AGEiIZAhQz
official_review
1,736,433,901,298
VbMXuhIySF
[ "everyone" ]
[ "~Carlo_Bianchini1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Creating a dataset of scientific articles review: The lightning talk proposal centres on the idea of creating a dataset of the publications and authors of an institution (in this specific case a hospital), in the absence of bibliographic software and an institutional repository. The metadata would be modelled on the structure of IRIS and enriched with data from other sources (for example, ORCID for authors), including Wikidata. It is not clear where and how the data would be structured, recorded and made available (or whether Wikidata is part of the mapping). There is a vast national and international literature on projects creating bibliographic datasets on Wikidata. compliance: 4 scientific_quality: 3 originality: 3 impact: 4 confidence: 4
UxixyYsYSU
Authoritative Practices and Collective Validation: Wikidata within the Collaborative Digital Edition of the Greek Anthology
[ "Maxime Guénette", "Mathilde Verstraete", "Marcello Vitali-Rosati" ]
The management and preservation of research data in the Humanities increasingly raises questions about its sustainability, sharing, and validation. In this context, Wikidata constitutes a powerful and collaborative tool. By challenging traditional models where researchers act as both producers and gatekeepers of authority, Wikidata redefines these issues and fosters new paradigms of collaboration. This paper will explore these dynamics of collaboration and shifting authority through the case study of the collaborative digital edition of the *Greek Anthology* (the AG project, hosted at the Canada Research Chair on Digital Textualities since 2014), implemented on a collaborative platform (<https://anthologiagraeca.org/>) where everyone is invited to participate according to their own knowledge. Wikidata is used in many ways within the AG project. First, all keywords (place names, authors, metrical forms, literary genres, etc.) used to annotate the platform have a Wikidata identifier or are created accordingly. Indeed, when a user participates in the editing of the corpus and wishes to add a keyword to an epigram, if the keyword does not exist, he or she must create it on Wikidata and then link it to the platform. Second, Wikidata has been used in a more intensive way to address inconsistencies in our list of authors. Like Wikidata, our data model is multilingual. However, the gaps and inconsistencies in Wikidata - such as missing authors, duplicate entries, and inconsistent information across languages - were directly mirrored on our platform (<https://anthologiagraeca.org/authors/>). This alignment made it essential to tackle these issues systematically to ensure the accuracy of our data. We started by searching for the names of these authors in various languages (at least in French, English, Italian, Ancient Greek and Latin). We then uploaded this information to Wikidata, and subsequently fetched it back to integrate it into the AG platform. Almost immediately after our data dump on Wikidata, its community quickly reviewed and corrected it to align our contribution with Wikidata's standards and guidelines. This process means we not only retrieved our data but also benefited from the community's improvements. We are making a conscious strategic choice: rather than positioning ourselves as the sole custodian of authority, we are delegating that responsibility to a wider community. Our presentation invites reflection on the implications of this shift toward distributed authority. How can that shift in authority benefit academic research projects? Is Wikidata's epistemological paradigm coherent with ours? Can we think of a generic epistemological framework to be effectively applied to specific academic endeavors? Based on the experiments carried out and the choices made as part of the AG project, this presentation will provide practical and conceptual answers to the questions of (distributed) authority, validation and collaboration in the use of Wikidata, opening up prospects for other projects in the Humanities. We suggest that Wikidata is not merely a technical tool but rather a space where methodological and epistemological debates can unfold. By engaging with this dynamic, researchers can enhance their projects while contributing to the creation of a more sustainable, inclusive, and collaborative knowledge base.
[ "Greek Anthology", "authority", "collaboration", "digital philology" ]
https://openreview.net/pdf?id=UxixyYsYSU
https://openreview.net/forum?id=UxixyYsYSU
OVlL7VdXKy
official_review
1,735,975,010,664
UxixyYsYSU
[ "everyone" ]
[ "~Camillo_Carlo_Pellizzari_di_San_Girolamo1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: A reflection on the use of Wikidata in academic research projects review: The authors describe how Wikidata is already being used as a source for multilingual authority data and tags by the Greek Anthology project (which is itself a collaborative project) and, starting from this experience, builds up more general reflections about the implications of delegating the curation of data used in academic research projects to a wider community. These reflections can have a significant relevance in fostering the cooperation of other academic projects with Wikidata; this is particularly relevant in the field of digital humanities, where open access databases are still relatively rare but would be useful to open new research fields. One of the reasons that make it difficult to create and maintain such databases is the relevant amount of human and financial resources needed to curate them; the use of Wikidata, as shown here, should be considered among the ways to mitigate this problem. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 5
UxixyYsYSU
Authoritative Practices and Collective Validation: Wikidata within the Collaborative Digital Edition of the Greek Anthology
[ "Maxime Guénette", "Mathilde Verstraete", "Marcello Vitali-Rosati" ]
The management and preservation of research data in the Humanities increasingly raises questions about its sustainability, sharing, and validation. In this context, Wikidata constitutes a powerful and collaborative tool. By challenging traditional models where researchers act as both producers and gatekeepers of authority, Wikidata redefines these issues and fosters new paradigms of collaboration. This paper will explore these dynamics of collaboration and shifting authority through the case study of the collaborative digital edition of the *Greek Anthology* (the AG project, hosted at the Canada Research Chair on Digital Textualities since 2014), implemented on a collaborative platform (<https://anthologiagraeca.org/>) where everyone is invited to participate according to their own knowledge. Wikidata is used in many ways within the AG project. First, all keywords (place names, authors, metrical forms, literary genres, etc.) used to annotate the platform have a Wikidata identifier or are created accordingly. Indeed, when a user participates in the editing of the corpus and wishes to add a keyword to an epigram, if the keyword does not exist, he or she must create it on Wikidata and then link it to the platform. Second, Wikidata has been used in a more intensive way to address inconsistencies in our list of authors. Like Wikidata, our data model is multilingual. However, the gaps and inconsistencies in Wikidata - such as missing authors, duplicate entries, and inconsistent information across languages - were directly mirrored on our platform (<https://anthologiagraeca.org/authors/>). This alignment made it essential to tackle these issues systematically to ensure the accuracy of our data. We started by searching for the names of these authors in various languages (at least in French, English, Italian, Ancient Greek and Latin). We then uploaded this information to Wikidata, and subsequently fetched it back to integrate it into the AG platform. Almost immediately after our data dump on Wikidata, its community quickly reviewed and corrected it to align our contribution with Wikidata's standards and guidelines. This process means we not only retrieved our data but also benefited from the community's improvements. We are making a conscious strategic choice: rather than positioning ourselves as the sole custodian of authority, we are delegating that responsibility to a wider community. Our presentation invites reflection on the implications of this shift toward distributed authority. How can that shift in authority benefit academic research projects? Is Wikidata's epistemological paradigm coherent with ours? Can we think of a generic epistemological framework to be effectively applied to specific academic endeavors? Based on the experiments carried out and the choices made as part of the AG project, this presentation will provide practical and conceptual answers to the questions of (distributed) authority, validation and collaboration in the use of Wikidata, opening up prospects for other projects in the Humanities. We suggest that Wikidata is not merely a technical tool but rather a space where methodological and epistemological debates can unfold. By engaging with this dynamic, researchers can enhance their projects while contributing to the creation of a more sustainable, inclusive, and collaborative knowledge base.
[ "Greek Anthology", "authority", "collaboration", "digital philology" ]
https://openreview.net/pdf?id=UxixyYsYSU
https://openreview.net/forum?id=UxixyYsYSU
6u9kNjL0DW
official_review
1,736,495,565,019
UxixyYsYSU
[ "everyone" ]
[ "~Monica_Berti1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: An important contribution on the critical use of Wikidata for collaborative projects in the humanities and philology review: The authors of this paper aim not only to present the use of Wikidata in the collaborative digital edition of the Greek Anthology (the AG project), but also to discuss important questions about the relationship between Wikidata and research. These reflections are the result of the long experience of the authors, who have chosen to contribute data from their project to Wikidata, not only to preserve it in a wider, collaborative and distributed environment, but also to make it an opportunity for methodological and epistemological debates. compliance: 5 scientific_quality: 5 originality: 5 impact: 5 confidence: 5
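One concrete step mentioned in the Greek Anthology abstract above is uploading multilingual author data to Wikidata and then fetching it back into the platform. The minimal Python sketch below shows one generic way to retrieve multilingual labels through the public wbgetentities API; the item ID Q42 is only a stand-in, and the AG platform's real synchronisation code is neither shown nor implied.

# Illustrative only: Q42 is a stand-in item ID, not an Anthology author.
import requests

API_URL = "https://www.wikidata.org/w/api.php"
LANGS = "en|fr|it|la|grc"  # the languages mentioned in the abstract

params = {
    "action": "wbgetentities",
    "ids": "Q42",              # placeholder QID; replace with an epigrammatist's item
    "props": "labels",
    "languages": LANGS,
    "format": "json",
}

data = requests.get(API_URL, params=params, timeout=30).json()
labels = data["entities"]["Q42"]["labels"]

# Print whichever of the requested language labels actually exist on the item.
for lang_code, label in labels.items():
    print(f"{lang_code}: {label['value']}")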
UaBDWHMWJa
Mapping UNIMARC to BIBFRAME: The SHARE Catalogue Knowledge Base on Wikibase.cloud
[ "Claudio Forziati", "Alessandra Moi" ]
(EN) SHARE Catalogue Mapping Knowledge Base[1] is a project, hosted on Wikibase.Cloud, that aims to represent the mapping between the UNIMARC bibliographic format and the BIBFRAME ontology. This mapping, carried out by the SHARE Catalogue technical team between 2022 and 2023, is primarily intended to enable SHARE Catalogue to adopt the Share Family's new LOD Platform[2], along with its advanced bibliographic entity representation and Linked Data Editor. In accordance with the SHARE family's principles of openness and interoperability, the technical team chose Wikibase technology to make the processed mapping accessible for widespread use in the professional and research community. Furthermore, the SHARE Catalogue team sought to experiment with a modeling of UNIMARC structured in statements, defining all the key elements of the format (tags, subfields, indicators, etc.), their sources (primarily UNIMARC 3rd edition updates), possibly their correspondence in external projects (e.g., iflastandards website), and their matching with BIBFRAME attributes. (IT) SHARE Catalogue Mapping Knowledge Base[1] è un progetto, ospitato su Wikibase.Cloud, che si propone di rappresentare la mappatura tra il formato bibliografico UNIMARC e l'ontologia BIBFRAME. Questa mappatura, realizzata dal gruppo tecnico di SHARE Catalogue tra il 2022 e il 2023, ha lo scopo principale di consentire a SHARE Catalogue di adottare la nuova piattaforma LOD della Share Family[2], con la sua rappresentazione avanzata delle entità bibliografiche e il Linked Data Editor. In conformità con i principi di apertura e interoperabilità della Share Family, il gruppo tecnico ha scelto Wikibase per rendere la mappatura disponibile per un uso diffuso nella comunità professionale e di ricerca. Inoltre, il team di SHARE Catalogue ha cercato di sperimentare una modellazione di UNIMARC strutturata in dichiarazioni, definendo tutti gli elementi chiave del formato (tag, sottocampi, indicatori, ecc.), le loro fonti (principalmente gli aggiornamenti della terza edizione di UNIMARC), eventualmente la loro corrispondenza in progetti esterni (ad esempio, il sito web iflastandards) e la loro corrispondenza con gli attributi BIBFRAME. [*] The presentation will be held in Italian [1] https://unimarc2bibframe.wikibase.cloud/ [2] https://www.share-family.org/#technology
[ "Wikibase", "UNIMARC", "BIBFRAME", "SHARE Catalogue" ]
https://openreview.net/pdf?id=UaBDWHMWJa
https://openreview.net/forum?id=UaBDWHMWJa
cOm5IJiL5b
official_review
1,736,014,132,012
UaBDWHMWJa
[ "everyone" ]
[ "~Luca_Martinelli1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: I approve review: The SHARE Catalogue project is a wonderful example of collaboration among university library institutions in Southern Italy, carried forward by excellent librarians whose main objectives are data quality and the sharing of those quality data. It will be interesting to see how the adoption of Wikibase will allow the project to take a further leap in quality. compliance: 5 scientific_quality: 5 originality: 4 impact: 5 confidence: 5
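As a rough, hypothetical illustration of how a single UNIMARC-to-BIBFRAME mapping statement of the kind described in the abstract above could be modelled in code before being loaded into a Wikibase, the Python snippet below defines one mapping record; the concrete modelling choices and property names of the actual SHARE Catalogue knowledge base may well differ.

# Hypothetical data model for one mapping statement; not the actual SHARE Catalogue schema.
from dataclasses import dataclass

@dataclass
class MappingStatement:
    unimarc_tag: str          # e.g. "200" (Title and Statement of Responsibility)
    unimarc_subfield: str     # e.g. "a" (Title proper)
    bibframe_property: str    # e.g. "bf:mainTitle"
    source: str               # documentation source for the UNIMARC element

example = MappingStatement(
    unimarc_tag="200",
    unimarc_subfield="a",
    bibframe_property="bf:mainTitle",
    source="UNIMARC Bibliographic, 3rd edition updates",
)

print(f"UNIMARC {example.unimarc_tag} ${example.unimarc_subfield} -> {example.bibframe_property}")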
TR7v6Okb9q
Developing a creative model for Wikidata analysis in the GLAM sector
[ "Enrique Tabone" ]
In 2019, the author started developing a research project exploring the visualization of specific datasets from Wikidata for artistic practice at the University of Salford’s Digital Curation Lab. Initially, the research involved an analysis of gender representation in the University of Salford’s Art Collection through Wikidata. This led to the development of an inquiring model for application on other art or museum collections. Subsequently, this model was applied to datasets that include about 99 university art collections across the UK. During the period 2021 – 2023, a similar approach was adopted on Heritage Malta’s collection of prehistoric female figurines, held at two museums in Malta and Gozo. The project brought together the research work conducted over the previous years, towards a coherent conclusion. Structured on Wikidata, these datasets have been demonstrated through data visualizations, and a data sound art installation (data sonification) accompanied by physical art objects, created through the author’s artistic practice. In the process, reflections on data representations in art collections and/or museums – regardless of whether it is data visualization or data sonification – have provided opportunities to explore concrete ways to look into a collection (through data about it) rather than at a collection as a set of artefacts. This data science point of view aims to enable the discovery of relationships between items within the dataset while stitching them together through shared properties, including in creative ways.
[ "Wikidata", "digital curation data sonification", "data visualisation", "GLAM" ]
https://openreview.net/pdf?id=TR7v6Okb9q
https://openreview.net/forum?id=TR7v6Okb9q
aLd91hKuRQ
official_review
1,735,839,802,709
TR7v6Okb9q
[ "everyone" ]
[ "~Elena_Marangoni1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Creativity, replicability and innovation in the outputs review: The presentation proposes a method of data analysis that has been applied to datasets of different cultural institutions and can therefore be replicated for the benefit of other contexts and institutions. Furthermore, it focuses on the outputs, visual and audio (an innovative aspect) and the restitution and visualization of the results are very important to communicate and spread the benefits of the data analysis made possible by Wikidata. compliance: 5 scientific_quality: 4 originality: 5 impact: 5 confidence: 4
TR7v6Okb9q
Developing a creative model for Wikidata analysis in the GLAM sector
[ "Enrique Tabone" ]
In 2019, the author started developing a research project exploring the visualization of specific datasets from Wikidata for artistic practice at the University of Salford’s Digital Curation Lab. Initially, the research involved an analysis of gender representation in the University of Salford’s Art Collection through Wikidata. This led to the development of an inquiring model for application on other art or museum collections. Subsequently, this model was applied to datasets that include about 99 university art collections across the UK. During the period 2021 – 2023, a similar approach was adopted on Heritage Malta’s collection of prehistoric female figurines, held at two museums in Malta and Gozo. The project brought together the research work conducted over the previous years, towards a coherent conclusion. Structured on Wikidata, these datasets have been demonstrated through data visualizations, and a data sound art installation (data sonification) accompanied by physical art objects, created through the author’s artistic practice. In the process, reflections on data representations in art collections and/or museums – regardless of whether it is data visualization or data sonification – have provided opportunities to explore concrete ways to look into a collection (through data about it) rather than at a collection as a set of artefacts. This data science point of view aims to enable the discovery of relationships between items within the dataset while stitching them together through shared properties, including in creative ways.
[ "Wikidata", "digital curation data sonification", "data visualisation", "GLAM" ]
https://openreview.net/pdf?id=TR7v6Okb9q
https://openreview.net/forum?id=TR7v6Okb9q
SJGHbNvfTB
official_review
1,736,697,259,475
TR7v6Okb9q
[ "everyone" ]
[ "~Iolanda_Pensa1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Creative use of Wikidata and new understanding of GLAM review: We are used to being guided by museums, archives and libraries in exploring their collections. Even within digital strategies, cultural institutions tend to maintain control and a curatorial role in how we use and explore content. But what we are aiming for is the moment when those collections finally become something new, outside of institutional control. I personally believe this is the meaning of GLAM: galleries, libraries, archives and museums become something new – GLAM – and they move outside their institutional frame. I think the work presented in this proposal goes in this direction. It relies on Wikidata to support creative ways to explore, interpret and use data. I think it is important to present this approach. I also personally like how it challenges the boundaries of collections and the role of institutions in defining how we access and interpret data. compliance: 4 scientific_quality: 4 originality: 4 impact: 4 confidence: 4
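As a side note on how such a Wikidata-driven collection analysis can be reproduced: the gender-representation breakdown mentioned in the abstract above boils down to a fairly small query against the public Wikidata Query Service. The Python sketch below is only an illustration of that general pattern, not the author's actual workflow; the collection Q-identifier is a placeholder, and the choice of properties (P195 collection, P170 creator, P21 sex or gender) is an assumption about how the collection items are modelled.

```python
# Hedged sketch: count works in a collection by the gender of their creator.
# The collection Q-id below is a placeholder, not the real Salford item;
# the property choices (P195, P170, P21) are assumptions for illustration only.
import requests

WDQS = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?genderLabel (COUNT(DISTINCT ?work) AS ?works) WHERE {
  ?work wdt:P195 wd:Q12345678 .      # collection (placeholder Q-id)
  ?work wdt:P170 ?creator .          # creator of the work
  ?creator wdt:P21 ?gender .         # sex or gender of the creator
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?genderLabel
ORDER BY DESC(?works)
"""

def gender_breakdown():
    # The query service returns SPARQL JSON results when asked via the Accept header.
    resp = requests.get(
        WDQS,
        params={"query": QUERY},
        headers={
            "Accept": "application/sparql-results+json",
            "User-Agent": "glam-collection-demo/0.1 (example sketch)",
        },
        timeout=60,
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return {r["genderLabel"]["value"]: int(r["works"]["value"]) for r in rows}

if __name__ == "__main__":
    for gender, count in gender_breakdown().items():
        print(f"{gender}: {count}")
```

The resulting per-gender counts are the kind of tabular output that can feed either a visualization or, with the numbers mapped to sound parameters, a sonification of the collection.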
T2HJkuxsBw
Hunting for Lost Heritage on Wikimedia Commons and Wikidata
[ "Marco Chemello" ]
A lot of content has been produced through crowdsourcing by Wikimedia users, not only on Wikipedia. Hundreds of thousands of photographs of cultural heritage have been uploaded over the last 20 years and are available on Wikimedia Commons, yet are still not linked to structured data on Wikidata; they often depict little-known historical buildings, such as ruined churches in Southern Italy. We will demonstrate that it is possible to use data mining techniques on Wikimedia projects to build a more comprehensive open catalogue of cultural heritage in Italy on Wikidata from crowdsourced content. We will present the results of phases 1 and 2 of this project, in which OpenRefine was used to create thousands of new items on Wikidata about "lost heritage" in Italy, and discuss the possible use of AI to speed up the process.
[ "Wikimedia", "Wikimedia Commons", "Wikidata", "Heritage", "Churches", "architecture", "crowdsourcing", "AI", "cultural heritage", "architecture", "historical buildings", "historical heritage" ]
https://openreview.net/pdf?id=T2HJkuxsBw
https://openreview.net/forum?id=T2HJkuxsBw
HZ0bjQQkzE
official_review
1,736,243,102,988
T2HJkuxsBw
[ "everyone" ]
[ "~Lucia_Sardo1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Review review: The presentation puts forward an operational proposal for using data mining techniques to enrich Wikidata with data already available on Wikimedia Commons. While not strongly innovative with respect to the topics addressed, it is an interesting proposal for adding data to Wikidata and for promoting cultural heritage that is currently under-represented and little known. The author is clearly an expert in handling these types of data on Wikidata, and the project he intends to present is already under way, which is why the proposal is considered sound and of definite interest, also given its applicability to other types of data. compliance: 5 scientific_quality: 4 originality: 4 impact: 5 confidence: 5
T2HJkuxsBw
Hunting for Lost Heritage on Wikimedia Commons and Wikidata
[ "Marco Chemello" ]
A lot of content has been produced through crowdsourcing by Wikimedia users, not only on Wikipedia. Hundreds of thousands of photographs of cultural heritage have been uploaded over the last 20 years and are available on Wikimedia Commons, yet are still not linked to structured data on Wikidata; they often depict little-known historical buildings, such as ruined churches in Southern Italy. We will demonstrate that it is possible to use data mining techniques on Wikimedia projects to build a more comprehensive open catalogue of cultural heritage in Italy on Wikidata from crowdsourced content. We will present the results of phases 1 and 2 of this project, in which OpenRefine was used to create thousands of new items on Wikidata about "lost heritage" in Italy, and discuss the possible use of AI to speed up the process.
[ "Wikimedia", "Wikimedia Commons", "Wikidata", "Heritage", "Churches", "architecture", "crowdsourcing", "AI", "cultural heritage", "architecture", "historical buildings", "historical heritage" ]
https://openreview.net/pdf?id=T2HJkuxsBw
https://openreview.net/forum?id=T2HJkuxsBw
27TygXfrbf
official_review
1,736,321,993,314
T2HJkuxsBw
[ "everyone" ]
[ "~Carlo_Bianchini1" ]
wikimedia.it/Wikidata_and_Research/2025/Conference
2025
title: Review review: The presentation describes a project in three phases, of which the first has been completed and is illustrated. The work is original with respect to the type of data it considers, and from a methodological point of view it relies on well-known techniques and tools, highlighting the critical issues encountered in applying them to the case study. It would be interesting for the presentation to be supplemented with a brief description of the methodology that the project plans to use in phases 2 and 3. compliance: 5 scientific_quality: 4 originality: 4 impact: 4 confidence: 4
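To make the "lost heritage" workflow described in the abstract above more concrete: the core data-mining step can be approximated with the public MediaWiki APIs. The Python sketch below is a hedged illustration rather than the project's actual OpenRefine pipeline; the Commons category name is a hypothetical example, and the script simply lists subcategories (typically one per building) that are not yet sitelinked to any Wikidata item, i.e. candidates for new item creation.

```python
# Hedged sketch: find Wikimedia Commons subcategories (e.g. individual churches)
# that are not yet sitelinked to a Wikidata item. The category name is a
# hypothetical example; the real project used OpenRefine, not this script.
import requests

COMMONS_API = "https://commons.wikimedia.org/w/api.php"
WIKIDATA_API = "https://www.wikidata.org/w/api.php"
HEADERS = {"User-Agent": "lost-heritage-demo/0.1 (example sketch)"}

def commons_subcategories(category):
    """Yield subcategory titles of a Commons category, following API continuation."""
    params = {
        "action": "query", "list": "categorymembers",
        "cmtitle": category, "cmtype": "subcat",
        "cmlimit": "500", "format": "json",
    }
    while True:
        data = requests.get(COMMONS_API, params=params, headers=HEADERS, timeout=30).json()
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

def has_wikidata_item(commons_title):
    """True if the Commons page is sitelinked to an existing Wikidata item."""
    data = requests.get(WIKIDATA_API, params={
        "action": "wbgetentities", "sites": "commonswiki",
        "titles": commons_title, "props": "info", "format": "json",
    }, headers=HEADERS, timeout=30).json()
    # wbgetentities returns an entity carrying a "missing" key when no item exists.
    return all("missing" not in e for e in data.get("entities", {}).values())

if __name__ == "__main__":
    # Hypothetical starting category; in practice one would walk a regional tree.
    for subcat in commons_subcategories("Category:Churches in Basilicata"):
        if not has_wikidata_item(subcat):
            print("No Wikidata item yet:", subcat)
```

A candidate list like this is exactly the kind of input that OpenRefine's Wikidata reconciliation and schema-based upload can turn into batch item creation, which matches what the abstract describes for phases 1 and 2.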