_id | text | label
---|---|---
a25cb031-1583-4dbe-806e-e950a4882ede
|
The linkage between the classic DUE paradigm and our proposed MARL paradigm is then demonstrated in Section . On a two-node, two-link network with CTM as the traffic propagation mechanism, we show that the numerical solution from the developed MARL algorithm agrees well with the solution from an analytical LP formulation in the DSO scenario and with the solution from an iterative method in the DUE scenario. To our knowledge, this paper is the first of its kind to unify the model-based (i.e., DUE) and data-driven (i.e., MARL) paradigms for dynamic routing games.
|
d
|
a549f433-c645-4d4a-b285-47c07fc56010
|
We demonstrate the effect of two countermeasures, namely tolling (see Section ) and signal control (see Section ), on the behavior of travelers and show that the systematic objective of city planners can be optimized by a proper control. The results show that on the Braess network in the study, the optimal toll charge on link \(l_{12}\) is greater than or equal to 25, with which the average travel time of selfish agents is minimized and the emergence of the Braess paradox can be avoided. In the large real-world road network, the optimal offset for signal control on Broadway is derived as 4 seconds, with which the average travel time of all controllable agents is minimized. The decomposition of travel time into waiting time (at intersections) and cruising time (on the link) further reveals that travelers can take advantage of a potential “green wave” when the offset is relatively small (i.e., less than 10 seconds). We note that both the waiting time and the cruising time are minimized at the optimal offset, i.e., 4 seconds.
|
d
|
9098532f-9f0f-42eb-afe5-e3d2b935c404
|
Nevertheless, there are several future directions we would like to explore. First, we assume that all travelers are perfectly rational in this study, meaning that travelers always take the route with the minimal expected travel time. However, bounded rationality [1]}, [2]} suggests travelers may not switch from their current route to an alternative route with a lower travel cost if the difference is not large enough. Second, in addition to route choice, departure time choice is another important research direction. We believe the proposed model-free MARL paradigm is able to accommodate the departure time choice of travelers. We leave bounded rationality and the simultaneous route and departure time choice for future research.
|
d
|
95271dfb-c790-4fb1-9373-a3647db91d79
|
Methods based on deep neural networks have been producing state-of-the-art (SOTA) results for many problems in machine learning and computer vision. These methods require large amounts of training and testing data to achieve the expected results. However, even a model trained on a large dataset may fail to generalize its learned knowledge to new environments and datasets. This is because deep learning algorithms assume that training and testing data are drawn from independent and identical distributions (i.i.d.). This assumption rarely holds true, as there is usually a shift in data distributions across different domains, as explained in Fig. REF . This domain shift between the source and target datasets makes deep neural networks produce wrong predictions on the target dataset. Training a deep neural network with source and target datasets so as to reduce the domain shift between their distributions is called domain adaptation [1]}.
|
i
|
2e2203c8-c65a-4678-abf8-0da515b37094
|
There are different types of domain adaptation (DA) techniques; a few of them are unsupervised DA [1]}, [2]}, semi-supervised DA [3]}, weakly supervised DA [4]}, one-shot DA [5]}, few-shot DA [6]}, and zero-shot DA [7]}. In this report, we discuss unsupervised DA techniques. Unsupervised DA is so called because an unlabeled target dataset is used during training to reduce the domain shift.
|
i
|
43c07e0d-6d18-46dc-a392-95758e58ee2e
|
In this report, the evaluated techniques are selected from adversarial and distance-based methods. Among adversarial methods, conditional adversarial domain adaptation (CDAN) and CDAN with entropy conditioning (CDAN+E) [1]} are selected. Among distance-based methods, deep domain confusion (DDC), which maximizes domain invariance [2]}, and Deep CORAL, which performs correlation alignment for deep domain adaptation [3]}, are selected. Domain adaptation techniques can be applied to the different types of domain shift explained in Fig. REF ; here, the dataset-to-dataset domain shift is selected for evaluation. The evaluated domain adaptation techniques are benchmarked on the Office-31 dataset [4]}.
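To make the distance-based family concrete, below is a minimal sketch of the CORAL alignment loss of [3]}, which penalizes the squared Frobenius distance between the feature covariance matrices of a source batch and a target batch; the function name and batch conventions are our own.

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Deep CORAL loss: squared Frobenius distance between the feature
    covariance matrices of a source batch and a target batch."""
    def covariance(x: torch.Tensor) -> torch.Tensor:
        n = x.size(0)
        xm = x - x.mean(dim=0, keepdim=True)   # center the features
        return xm.t() @ xm / (n - 1)
    d = source.size(1)
    cs, ct = covariance(source), covariance(target)
    return ((cs - ct) ** 2).sum() / (4 * d * d)

# Usage: total = classification_loss + lambda_coral * coral_loss(f_src, f_tgt)
```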
<FIGURE>
|
i
|
95409a92-0d55-4331-8ea2-0291ac462e2e
|
In this section we describe the experiments performed with the different methods on different data domains. Our implementations are written in PyTorch and are located in the repository linked above. The experiments were carried out using Google Colab GPU support (Tesla P4).
|
m
|
0b0e7200-6510-4e66-9d05-43f8c6593557
|
Based on the methods we evaluated, we can say there are three possible scenarios for accuracy performance. Note that in the graphs presented here, the accuracy increases when we add a domain adaptation loss (e.g., Amazon \(\rightarrow\) Webcam); however, we noticed that accuracy can decrease or stay the same for other domain shifts. Refer to the repository for the other plots. Moreover, recognition accuracies do not exceed 70% in most experiments, which means there is a lot of room for improvement. For future work, we can carry out training for 200 epochs (instead of 100) and better understand the behavior of our losses, for instance why losses oscillate strongly in some cases. Finally, we can conclude that we gained hands-on experience building end-to-end pipelines using deep neural networks for domain adaptation tasks.
|
d
|
307f2c24-6bea-48f7-a423-78a70c59faff
|
Few-shot learning can be defined as a type of machine learning which aims at achieving good learning performance with few (usually 20 or fewer) supervised training examples [1]}. It is an important area of machine learning which contributes to the advancement of AI toward learning the way humans do, as humans easily learn from few examples. It also helps in learning rare cases, which can be applied to fraud detection in electronic transactions. Few-shot learning can be applied to different application domains including computer vision, robotics, natural language processing, acoustic signal processing, drug discovery, and the like. Few-shot learning also reduces the data gathering effort and the computational cost associated with big datasets, which are very common issues in deep learning.
|
i
|
671c69a2-997e-4165-8a39-bbe53e4b333f
|
Various techniques are applied to address the problem of few-shot learning. In all cases, however, there is a way to exploit prior knowledge accumulated in the data, model, or algorithm of a related machine learning task. The most common and effective way is through the algorithm approach, particularly meta-learning. In meta-learning, an attempt is made to improve performance on a new task using meta-knowledge extracted across related tasks through a meta-learner [1]}. Hence, the formulation of tasks plays an important role in such problems. These tasks, also known as episodes, have their own training and test sets. The training and test sets are called support and query sets, respectively, in few-shot learning terminology. Each task has the same number of classes (referred to as ways) in the support and query sets, while the number of examples per class in the support set defines the shot. Hence, a 3-way 5-shot few-shot learning problem describes a task formulation with 3 classes and 5 examples per class in the support set, as shown in the sketch below.
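A minimal sketch of this N-way K-shot episode sampling follows; the function name and the (example, label) dataset format are our own illustrative assumptions.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=3, k_shot=5, n_query=5):
    """Sample one few-shot task (episode): n_way classes, each with
    k_shot support and n_query query examples. `dataset` is assumed
    to be a list of (example, label) pairs with sortable labels."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        chosen = random.sample(by_class[c], k_shot + n_query)
        support += [(x, c) for x in chosen[:k_shot]]
        query += [(x, c) for x in chosen[k_shot:]]
    return support, query
```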
|
i
|
82b78513-17db-45fd-ba8d-95eda9203413
|
Meta-learning, also known as learning to learn, is any type of learning based on prior learning experience with other tasks. The similarity of the tasks plays an important role: the more similar the tasks are, the more types of meta-data one can leverage, and defining task similarity remains a key challenge. Other types of learning, including multi-task learning, transfer learning, and ensemble learning, can also be meaningfully combined with meta-learning systems. Hence, scientific contributions in meta-learning speed up and improve the design of machine learning pipelines and also allow us to replace hand-engineered algorithms with novel approaches learned in a data-driven way [1]}. Likewise, this study explores and opens a wide range of perspectives on examining task similarities and their effects on specific applications of deep learning and few-shot learning.
|
i
|
33e0c63b-41d1-4f4b-b010-23378ed8a622
|
Amharic optical character recognition in general, and handwritten character recognition in particular, is not a well-studied area of research. The unavailability of standard public datasets makes Amharic one of the low-resource languages. Even though there have been some attempts, most of these works focus on applying off-the-shelf methods that were designed particularly for Latin scripts. This trend has created two interconnected problems. The first is the limited fit between the real problem and the proposed solution. The second arises from overlooking the opportunities that might emerge from exploring specific contexts with possible innovations, which could then be scaled up to generalized solutions [1]}, [2]}, [3]}, [4]}.
|
i
|
c6a2b170-9b9b-4b17-a230-73cd4e27c186
|
In this study, offline handwritten Amharic character recognition is addressed using few-shot learning for the first time. Few-shot learning is a recent and promising area of research that resolves the limitation of deep learning requiring huge amounts of labeled data. Accordingly, such techniques open a way to address low-resource languages like Amharic. It is also suitable for addressing rarely occurring characters in real-life documents. More importantly, in this study training episodes are examined from the context and nature of Amharic characters, which is a core issue in few-shot learning problems. Most studies in few-shot learning focus on understanding the problem itself and hence are based on common image datasets like Mini-ImageNet and Omniglot. However, this study presents a more realistic application of few-shot learning using prototypical networks, a popular and simple type of few-shot learning.
|
i
|
a54bbcba-973a-4a61-acd5-a31d633ea0ae
|
The challenges facing deep learning studies on low-resource languages are not only the unavailability of large standard datasets but also the difficulty of training deep learning architectures, considering that they typically have far more parameters than the dataset has examples [1]}. In regard to few-shot learning as well, the datasets used for assessment are not as challenging and realistic as the progress made in techniques and models would warrant [2]}. Hence, in this study a suitable few-shot learning dataset is organized for Amharic characters using an Android app developed as a part of this study. Generally, the contributions of this paper can be summarized as follows:
|
i
|
2affcf64-d4c5-409f-9ef4-4ef1d73c5161
|
Organized a new few-shot learning dataset for Amharic handwritten characters with an appropriate split into train, validation, and test sets.
Implemented few-shot learning for Amharic handwritten character recognition for the first time as a benchmark.
Empirically explored how training episodes affect performance in few-shot learning, with a novel contribution in the context of Amharic handwritten character recognition.
|
i
|
e04f6d41-57ea-4d8f-999f-0b0c9e3c2426
|
Several recent papers have emerged to clarify the progress in few-shot learning [1]}, [2]}, [3]}, [4]}. These studies mainly address the problems associated with few-shot learning datasets and performance measures. Triantafillou et al. [1]} proposed META-DATASET, a new benchmark for training and evaluating few-shot learning models that is large-scale, consists of diverse datasets, and presents more realistic tasks. Dhillon et al. [2]} performed extensive studies on benchmark datasets to propose a metric that quantifies the hardness of a few-shot episode, which can be used to report the performance of few-shot algorithms in a more systematic way.
|
w
|
e8aec118-9fee-444f-b635-55c21b751314
|
Wang et al. [1]} made a rigorous review of few-shot learning problems to formally define and construct a good taxonomy of them. The authors gave a formal definition of few-shot learning that connects to the general problem of machine learning by illustrating the unreliable empirical risk minimizer. This is the core issue in few-shot learning: it arises from having too few examples, as identified through the error decomposition of supervised machine learning. Therefore, few-shot learning techniques should find a way to use prior knowledge accumulated in the data, model, and algorithm of other related tasks. Accordingly, Wang et al. [1]} classified few-shot learning methods by their focus on these three constituents into data (augment the training dataset using prior knowledge), model (constrain the hypothesis space by prior knowledge), and algorithm (alter the search strategy in the hypothesis space by prior knowledge).
|
w
|
c3c0c9f1-ae7e-4fd5-86dc-d094487ee4e3
|
Typical examples of algorithm few-shot learning are Model-Agnostic Meta-Learning (MAML) [1]} and its variants like Reptile [2]}. These methods learn a parameter initialization that can be fine-tuned quickly for a new task. MAML learns the initialization through effective gradient steps so that a new task with a small amount of training data achieves good generalization. Reptile does this by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task [3]}, [1]}, [2]}. Another set of model few-shot learning methods includes siamese neural networks, matching networks, and prototypical networks, which are task-invariant embedding learning models [3]}. These methods are also known as metric learning, as they learn to classify new images based on their similarity to support images, unlike gradient-based meta-learning, which leverages gradient descent to learn commonalities among various tasks [7]}, [8]}, [9]}. The Simple Neural Attentive Learner (SNAIL) is another embedding network, with interleaved temporal convolution layers and attention layers, which presents an alternative paradigm where a generic architecture has the capacity to learn an algorithm that exploits domain-specific task structure [3]}, [11]}. In this paper, both model and algorithm few-shot methods are exhibited through the implementation of prototypical networks and the incorporation of auxiliary-task episodic training.
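To make the Reptile update concrete, here is a minimal PyTorch-style sketch under our own naming; inner-loop details (optimizer, number of steps) vary across implementations.

```python
import copy
import torch

def reptile_step(model, task_batches, loss_fn, inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-update: train a copy of the model on one sampled
    task for a few SGD steps, then move the shared initialization a
    fraction of the way towards the task-adapted weights."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for x, y in task_batches:              # a few inner-loop steps
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    with torch.no_grad():                  # meta-update on the init
        for p, q in zip(model.parameters(), adapted.parameters()):
            p += meta_lr * (q - p)
```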
|
w
|
f19978e1-cfb2-4f6d-abe8-64621791d370
|
More related papers address few-shot learning from two main perspectives, which are interrelated by nature. The first focuses on extracting highly discriminative features which generalize easily across classes, so that very few samples are sufficient. The other direction looks for possible auxiliary or related tasks that can complement the main few-shot classification task. This can be done by proposing creative classification tasks that can be trained in a multi-task learning fashion [1]}. Mazumder et al. [1]} proposed an approach which uses self-supervised auxiliary tasks to produce highly discriminative generic features from image datasets. The auxiliary task is a two-level rotation, of patches inside the image and of the whole image, assigning one out of 16 rotation classes to the modified image. When these tasks are trained simultaneously with the main classification task, the network learns high-quality generic features that help improve few-shot classification performance. Such methods utilize the gradient/optimization-based meta-learning approach to few-shot learning. Accordingly, a related work by Tripathi et al. [3]} further integrated both induction and transduction into the base learner in an optimization-based meta-learning framework. On the other hand, Ravi and Larochelle [4]}, rather than training a single model over multiple episodes, introduced an LSTM meta-learner which learns to train a custom model for each few-shot episode.
|
w
|
dc5483e0-222a-400f-9311-62401ede3853
|
A simpler and more efficient approach to few-shot learning is prototypical networks, a metric learning method. The main idea is that there exists an embedding in which points cluster around a single prototype representation for each class. Hence, the networks learn a metric space in which classification can be performed by computing distances to the prototype representation of each class. Prototypical networks learn a non-linear mapping of the input into an embedding space using neural networks and take a class's prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype [1]}. Building on prototypical networks, Fort [2]} extended them to Gaussian prototypical networks by incorporating a Gaussian covariance matrix, where the network constructs a direction- and class-dependent distance metric on the embedding space, using the uncertainties of individual data points as weights.
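A minimal sketch of prototypical-network inference following the description above; the encoder, label encoding, and tensor shapes are illustrative assumptions.

```python
import torch

def proto_probabilities(encoder, support_x, support_y, query_x, n_way):
    """Prototypical-network inference: each class prototype is the mean
    of its embedded support points; queries are classified by softmax
    over negative Euclidean distances to the prototypes. Assumes
    support_y holds integer class labels 0 .. n_way-1."""
    z_s = encoder(support_x)                       # (n_support, d)
    z_q = encoder(query_x)                         # (n_query, d)
    prototypes = torch.stack(
        [z_s[support_y == c].mean(dim=0) for c in range(n_way)])
    dists = torch.cdist(z_q, prototypes)           # (n_query, n_way)
    return (-dists).softmax(dim=1)                 # class probabilities
```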
|
w
|
3ede352b-5315-4936-a883-b37c98854b32
|
This study addressed offline handwritten Amharic character recognition using few-shot learning for the first time. As a baseline, this study implemented prototypical networks, an embedding- and metric-based few-shot learning method. Exploiting the row-wise and column-wise similarities of the Amharic alphabet, a novel way of augmenting the training episodes is explored as the proposed method. The results revealed that the proposed method outperformed the baseline by a significant margin in a 5-way 1-shot few-shot learning setting. This study has also shown how the formulation of training episodes using related auxiliary tasks can affect the performance of few-shot learning methods. The dataset prepared in this study is another important contribution to Amharic few-shot learning research, providing a more suitable and realistic dataset which can be used by other researchers.
|
d
|
e3f782da-0c20-4681-b062-620c83acd5f6
|
Even though this study experimented with few-shot learning in different settings by varying the shots, the effects of varying the ways remain unexplored. Hence, future studies can focus on extending the experiments to find the optimal few-shot learning setting. Studying the formulation of training episodes from the combination of character, row, and column labels is also an important avenue for progress in few-shot learning for Amharic handwritten character recognition.
|
d
|
b9407809-720e-4b77-9186-2799a74d5317
|
Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni, “Generalizing from a few examples: A survey on few-shot learning”, ACM Computing Surveys, vol. 53, no. 3, pp. 1–34, 2020.
Joaquin Vanschoren, “Meta-learning: A survey”, arXiv preprint arXiv:1810.03548, 2018.
Efrem Yohannes Obsie, Hongchun Qu, and Qingqin Huang, “Amharic Character Recognition Based on Features Extracted by CNN and Auto-Encoder Models”, in 2021 The 13th International Conference on Computer Modeling and Simulation, 2021, pp. 58–66.
Birhanu Hailu Belay, Tewodros Habtegebrial, Marcus Liwicki, Gebeyehu Belay, and Didier Stricker, “A Blended Attention-CTC Network Architecture for Amharic Text-image Recognition”, in ICPRAM, 2021, pp. 435–441.
Mesay Samuel Gondere, Lars Schmidt-Thieme, Durga Prasad Sharma, and Randolf Scholz, “Multi-script handwritten digit recognition using multi-task learning”, Journal of Intelligent & Fuzzy Systems, vol. 43, no. 1, pp. 355–364, 2022.
Mesay Samuel Gondere, Lars Schmidt-Thieme, Durga Prasad Sharma, and Abiot Sinamo Boltena, “Improving Amharic Handwritten Word Recognition Using Auxiliary Task”, arXiv preprint arXiv:2202.12687, 2022.
Yuqing Hu, Vincent Gripon, and Stéphane Pateux, “Leveraging the feature distribution in transfer-based few-shot learning”, in International Conference on Artificial Neural Networks, Springer, 2021, pp. 487–499.
Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al., “Meta-dataset: A dataset of datasets for learning to learn from few examples”, arXiv preprint arXiv:1903.03096, 2019.
Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto, “A baseline for few-shot image classification”, arXiv preprint arXiv:1909.02729, 2019.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang, “A closer look at few-shot classification”, arXiv preprint arXiv:1904.04232, 2019.
Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum, “The Omniglot challenge: a 3-year progress report”, Current Opinion in Behavioral Sciences, vol. 29, pp. 97–104, 2019.
Chelsea Finn, Pieter Abbeel, and Sergey Levine, “Model-agnostic meta-learning for fast adaptation of deep networks”, in International Conference on Machine Learning, PMLR, 2017, pp. 1126–1135.
Alex Nichol, Joshua Achiam, and John Schulman, “On first-order meta-learning algorithms”, arXiv preprint arXiv:1803.02999, 2018.
Gregory Koch, Richard Zemel, Ruslan Salakhutdinov, et al., “Siamese neural networks for one-shot image recognition”, in ICML Deep Learning Workshop, Lille, 2015, vol. 2.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al., “Matching networks for one shot learning”, Advances in Neural Information Processing Systems, vol. 29, 2016.
Jake Snell, Kevin Swersky, and Richard Zemel, “Prototypical networks for few-shot learning”, Advances in Neural Information Processing Systems, vol. 30, 2017.
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel, “A simple neural attentive meta-learner”, arXiv preprint arXiv:1707.03141, 2017.
Pratik Mazumder, Pravendra Singh, and Vinay P. Namboodiri, “Improving few-shot learning using composite rotation based auxiliary task”, in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 2654–2663.
Ardhendu Shekhar Tripathi, Martin Danelljan, Luc Van Gool, and Radu Timofte, “Few-Shot Classification By Few-Iteration Meta-Learning”, arXiv preprint arXiv:2010.00511, 2020.
Sachin Ravi and Hugo Larochelle, “Optimization as a model for few-shot learning”, 2016.
Stanislav Fort, “Gaussian prototypical networks for few-shot learning on omniglot”, arXiv preprint arXiv:1708.02735, 2017.
Etienne Bennequin, “easyfsl”, https://github.com/sicara/easy-few-shot-learning.
Timothy M. Hospedales, Antreas Antoniou, Paul Micaelli, and Amos J. Storkey, “Meta-learning in neural networks: A survey”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
<FIGURE>
|
d
|
2ad4d144-99e4-47c3-98bb-68fbdc603e31
|
Mesay Samuel is a senior lecturer and PhD candidate at the faculty of Computing and Software Engineering, Arba Minch University, Ethiopia, under a joint program (Arba Minch University–University of Hildesheim). He received his MSc from Jimma University, Ethiopia, in Knowledge Management (2013) and his BSc from Bahir Dar University, Ethiopia, in Computer Science (2008). His current research interests include machine learning, deep learning, optical character recognition, expert systems, and knowledge management.
<FIGURE>
|
d
|
daa91f45-1173-44d4-bc7f-e9e5a73a3fc3
|
Prof. Dr. Dr. Lars Schmidt-Thieme is a professor of Machine Learning, heading the Information Systems and Machine Learning Lab (ISMLL) at the Institute for Computer Science, University of Hildesheim, since 2006. Before that, he was an assistant professor at the Institute for Computer Science at the University of Freiburg from 2003 to 2006. He graduated with a PhD in Economics and Management, with a thesis on frequent pattern mining, from the University of Karlsruhe in 2003, and with a diploma (now called a master's) in Mathematics from the University of Heidelberg in 1999. His research interests are supervised machine learning for complex predictors and complex decisions, i.e., for all problems whose instances cannot be described naturally by a set of attributes, e.g., recommender systems, relational learning problems, time series classification, etc.
<FIGURE>
|
d
|
c980db64-3872-4add-9afd-fa75068437ad
|
Prof. Dr. Durga Prasad Sharma (DP Sharma) is associated with AMUIT MOEFDRE under UNDP and MSRDC MAISM (RTU), and is an Academic Ambassador for Cloud Computing (AI), IBM, USA. Prof. Sharma is a digital diplomat, computer scientist, strategic innovator, and international orator. He is the recipient of 52 national and international awards and a wide range of appreciations, including one of India's highest civilian awards, the “Sardar Ratna Life Time Achievements International Award 2015” (in memory of the first Deputy Prime Minister of independent India, Sardar Vallabhbhai Patel). He has published more than a dozen books (16 textbooks and 6 distance education book series as writer and editor) on various themes of computing and IT, and 131 international research papers/articles (print and digital) in refereed international journals and conferences. He has 26 years of experience in academic, research, and professional consultancy services. He has served Mission Publiques for the Internet Governance Forum under the United Nations Convention. Prof. Sharma has delivered 43 keynote speeches at international conferences held in Canada, the USA, China, South Korea, Malaysia, India, and other countries.
<FIGURE>
|
d
|
422c4ba4-9881-451a-95e7-e24f28e7ca97
|
Dr.-Ing. Abiot Sinamo is currently Director General for the ICT Sector under the Ministry of Innovation and Technology of the Federal Democratic Republic of Ethiopia. He received his PhD from Oldenburg University, Germany, specializing in intelligent systems and ERP systems. He served as Dean of the School of Computing at Mekelle University and delivered courses to undergraduate and postgraduate students for a total of 17 years. He has advised a number of MSc theses and is co-advising two PhD students. He has published several papers in the areas of natural language processing, machine learning, artificial intelligence, knowledge representation, cloud computing, computer vision, software testing, and ERP adoption, in which he is interested in furthering his research career.
<FIGURE>
|
d
|
27d151c5-2d67-4376-b40f-7cf24da7425f
|
Abey Bruck is a lecturer at the faculty of Computing and Software Engineering, Arba Minch University, Ethiopia. He received his MSc from Addis Ababa University, Ethiopia in Information Science (2013) and his BSc from Arba Minch University, Ethiopia in Computer Science and IT (2008). His current research interests include machine learning, deep learning, and robotics.
|
d
|
d195e02c-b29e-46af-95ac-a4ff24860828
|
Learning from high-dimensional data remains a challenging task. Particularly for reinforcement learning (RL), the complexity and high dimensionality of the Markov Decision Process (MDP) state often leads to complex or intractable solutions. A direct application of RL on high-dimensional input spaces therefore typically yields instabilities and poor performance. In order to still facilitate learning from high-dimensional input data, an encoder architecture can be used to compress the inputs into a lower-dimensional latent representation. To this end, a plethora of work has successfully focused on discovering low-dimensional representations that accommodate the underlying features of the task at hand [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}.
|
i
|
7c7eaa2c-3e30-40ef-a7a6-c117c2b5ae21
|
The resulting low-dimensional representations, however, tend to seldom contain specific disentangled features, which leads to disorganized latent information. This means that the latent states can represent the information from the state in an arbitrary way, leading to suboptimal interpretability. In line with structuring a latent representation, [1]} have shown notions and uses of interpretability in MDP representations. When expanding this notion of interpretability to be compatible with RL, it has been argued that the agent's state should be an important element of a latent representation, since it generally represents what is controllable by the policy. In this light, [2]} introduced the concept of isolating and disentangling controllable features in a low-dimensional maze environment by means of a selectivity loss. Furthermore, [3]} took an object-centric approach to isolate distinct controllable objects. Controllable features, however, represent only a fragment of an environment; in many cases the uncontrollable features are of equal importance. For example, in the context of a distribution of mazes, the information about the wall structure is crucial for predicting the next controllable (agent) state following an action (see Fig. REF ). A representation would therefore benefit from incorporating both controllable and uncontrollable features, preferably in a disentangled, interpretable arrangement.
|
i
|
f9e7a7b0-1ba6-4a3d-a1f7-2991a3c6b91a
|
In an MDP setting, we show that a latent representation can be disentangled into two parts, one designed to contain the controllable features and the other designed to contain the uncontrollable features.
This allows for a precise and visible separation of the latent features, improving interpretability and representation quality, and possibly moving towards a basis for building causal relationships between an agent and its environment.
The learning algorithm consists of both an action-conditioned and a state-only forward predictor, along with an entropy and an adversarial loss, which reliably isolate and disentangle the controllable versus the uncontrollable features. Furthermore, we show that learning and planning can achieve strong performance when applied to the human-interpretable disentangled latent representation. An implementation of the algorithm is available at https://github.com/Jacobkooi/-Un-Controllable_Features.
<FIGURE>
|
i
|
a666051e-d099-4151-97c1-8512d9205d1a
|
In this section, we showcase the disentanglement of controllable and uncontrollable features in three different environments: (i) a quadruple maze environment, (ii) the catcher environment, and (iii) a random maze environment. The first environment is relatively simple and is used to showcase the algorithm's ability to disentangle low-dimensional latent representations. The catcher environment examines a setting where the uncontrollable features are not static, and the random maze environment is used to showcase disentanglement in a more complex distribution of environments, followed by the application of downstream tasks with learning and planning.
The base of the encoder is derived from [1]} and consists of two convolutional layers, followed by a fully connected layer for low-dimensional latent representations or an additional CNN for a higher-dimensional latent representation such as a feature map. For the full network architectures, we refer the reader to Appendix .
In all environments, the encoder \(f(s;\theta _{enc})\) is trained from a buffer \(\mathcal {B}\) filled with transition tuples \((s_{t}, a_{t}, r_{t}, s_{t+1})\) obtained using randomly sampled actions \(a_{t} \in \mathcal {A}\) .
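As a simplified illustration of this training setup, the following sketch samples transitions from the buffer and applies only an action-conditioned forward-prediction loss; the paper's full objective additionally includes the state-only predictor and the entropy and adversarial terms, and all names here are our own.

```python
import random
import torch
import torch.nn.functional as F

def train_step(encoder, forward_model, buffer, optimizer, batch_size=64):
    """One simplified update: sample transitions and minimize an
    action-conditioned forward-prediction loss in latent space."""
    batch = random.sample(buffer, batch_size)
    s, a, _, s_next = (torch.stack(t) for t in zip(*batch))
    z, z_next = encoder(s), encoder(s_next)
    z_pred = forward_model(z, a)          # predicted next latent state
    loss = F.mse_loss(z_pred, z_next.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```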
|
m
|
27819c41-4716-4817-a40a-54ad2dfc6280
|
Many works have focused on converting high-dimensional image inputs to a compact, abstract representation to improve generalization and performance. Learning this representation can make use of auxiliary tasks in addition to the pure RL objectives [1]}. One way to ensure a meaningful latent space is to implement architectures that require a pixel reconstruction loss, such as a variational [2]}, [3]} or a deterministic [4]} autoencoder. Others combined basic pixel reconstruction with latent planning [5]}, [6]} or prediction [7]}, [8]}. Although reconstruction losses prevent latent collapse and ensure a rich latent space, they also facilitate the reconstruction of task-irrelevant noise, thus possibly keeping irrelevant features in the latent space.
|
w
|
401cf426-6289-45fd-ad33-38b98ae667be
|
More closely related to our work is the work by [1]}, which connects individual latent features to independently controllable states in a maze using a reconstruction loss and a selectivity loss. The work by [2]} visualizes the representation of an agent and its transitions in a maze environment, but does not disentangle the agent state into its controllable and uncontrollable parts, which limits the interpretability analysis and does not allow simplifications during planning. The work by [3]} uses an object-oriented approach to isolate different controllable features, using graph neural networks (GNNs) and a contrastive forward prediction loss. A deeper study of predictive losses by [4]} shows the limitations and benefits of different predictive losses in control-dominant environments, albeit in a stochastic mutual information (MI) setting. Slightly related are also the works by [5]}, [6]}, who focus on the controllable features of an environment with inverse-prediction losses and use these features to guide exploratory behaviour. Lastly, sharing similarity in terms of the separation of the latent representation, [7]} use a reconstruction-based adversarial architecture that divides their latent representation into reward-relevant and reward-irrelevant features.
<FIGURE>
|
w
|
200afb5c-e923-4153-a8cc-d809463e9578
|
Vector-quantization VAEs are prone to codebook collapse, a phenomenon of vanishing codebook utilization and vanishing entropy, which has negative effects on the robustness of the dimensionality reduction and the number of features learned [1]}. In some works it is addressed by random restarts of dead (inactive) codes [2]}. In this paper we propose a controlled entropy regularization mechanism through topology adjustments of the quantized latent space.
|
i
|
fbf04036-3cf0-4576-b116-67770cc0b04d
|
As described previously, periods of decreased VQ entropy appear commonly throughout training. The phenomenon is similar in nature to the posterior collapse observed in VAEs, which constitutes the unwanted convergence of the latent space \(q_\phi (z \mid x)\) towards the prior, such that the VAE effectively stops receiving any new informative features from the input data: \(\exists i \text{ s.t. } \forall \mathbf {x}: q_\phi \left(z_i \mid \mathbf {x}\right) \approx p\left(z_i\right)\).
|
i
|
23b507bb-9fc9-47d0-845a-2bdf8fe9116c
|
Heuristically, an overly large latent entropy with a high \(\sigma \) over an interval may be a good indicator that the model is not converging and is struggling to find effective gradients for parts of the computation graph. Hence it makes sense to explore entropy-inducing mechanisms that are structural, upper-bounded, and do not undermine overall learning stability.
|
i
|
234957ff-1a1c-4dfe-8656-57f5381a22a3
|
The research was motivated by the theory of entropy coding, which was developed in the context of data compression. A classical work in signal processing that established the relation between entropy coding and vector quantization, ECVQ [1]}, proposed entropy-constrained vector quantization for analog signals. We found that the core idea behind this analog-signal algorithm can be translated to the dimensionality reduction performed in modern autoencoder architectures.
In that regard, parts of data compression theory (specifically, entropy coding and the source coding theorem) are applied to VQ, while the dimensionality reduction itself is interpreted as a stochastic data compression process in this context.
|
i
|
fadcc7b6-faaa-47d3-885e-9b17638696e2
|
More formally, entropy coding is the process of compressing data while minimizing the code (representational unit) length, within the lower bound of Shannon's source coding theorem [1]}, which bounds lossless compression with respect to the entropy of the source data:
\(\mathbb {E}_{x \sim P}[l(d(x))] \ge \mathbb {E}_{x \sim P}\left[-\log _b(P(x))\right]\)
|
i
|
a8a63e4b-3ba8-4e8e-84d6-6d2ccc743128
|
We make the assumption that we can generalize the theorem to the VQ codebook \(\mathcal {X}\) utilization rate, as opposed to individual code length (since each element of the codebook has a fixed representational capacity), given that \(l(d(x)) \in \mathcal {X}\) . In other words, when we project the source coding theorem from entropy coding onto VQ, our assumptions are: (1) higher codebook entropy corresponds to a more feature-rich (lossless) dimensionality reduction but can lead to decreased learning speed, while (2) lower codebook entropy corresponds to faster learning (faster reconstruction convergence in VQ-VAE) but can lead to a lossier dimensionality reduction.
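For reference, codebook-utilization entropy can be estimated from the quantized code indices of a batch, as in the following sketch (our own helper, not the paper's exact implementation):

```python
import torch

def codebook_entropy(indices: torch.Tensor, codebook_size: int) -> torch.Tensor:
    """Shannon entropy (in nats) of codebook usage over a batch of
    quantized code indices; a proxy for VQ codebook utilization."""
    counts = torch.bincount(indices.flatten(), minlength=codebook_size)
    probs = counts.float() / counts.sum()
    probs = probs[probs > 0]              # skip unused codes: 0*log(0) = 0
    return -(probs * probs.log()).sum()
```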
|
i
|
4c009bd2-b71d-4f3d-8409-3aeee0ec3c4a
|
However, we found that simply targeting increased codebook entropy undermines learning stability and meaningful gradient computation. While experimenting with various constrained entropy regularization methods, we came to observe persistent patterns linking entropy to particular topological features found in the derived persistent homology. We found that a differentiable implementation of PH can be used to define topological parameters which invoke consistent entropy feedback without undermining the general stability of learning with respect to the autoencoder reconstruction loss.
|
i
|
b7873239-17b2-4510-9471-057c04267c84
|
One of the major relevant works is the topological autoencoder [1]}, where a loss term over \( \left(\mathbf {A}^X, \mathbf {A}^Z, \pi ^X, \pi ^Z\right) \) was proposed, corresponding to the distance matrices \(A\) and topologically relevant persistent pairings \(\pi \) of the input space \(X\) and latent space \(Z\) , respectively. The proposed topological autoencoder has proven to find consistent patterns in the observed persistent homology (PH) of constructed Vietoris-Rips filtrations, strengthening the assumption that persistent homology can be applied to dimensionality reduction.
|
w
|
f003be8f-aa85-452c-bee5-b240a6e0299c
|
The research was also motivated by a work on PH learning [1]}, which investigated whether neural networks are able to learn PH features of \(F \circ \mathrm {PH}: \mathcal {X} \rightarrow \mathcal {D} \rightarrow \mathcal {Y}\) directly, where \(\mathcal {X}\) is the space of inputs, \(\mathcal {D}\) is the space of persistence diagrams, and \(\mathcal {Y}\) is the space of PH features. The results indicated that a CNN can efficiently approximate various types of persistence diagrams.
|
w
|
81b12ceb-ffac-4dc7-8abd-f40c436586f0
|
Two works addressing VQ codebook collapse, referred to as wav2vec [1]} and Jukebox [2]} after their model labels, pushed the research to further link codebook collapse and VQ entropy. We also make the assumption that higher entropy of VQ latents correlates inversely with the (unwanted) overfit towards the identity [3]}, systematically observed in autoencoders and usually addressed by entropy-inducing measures such as dropout and denoising mechanisms.
|
w
|
f5bb97b3-9c09-4664-95bb-61dc3c0e64ca
|
Ideas correlating entropy coding with analog signal quantization, as mentioned, were presented in the ECVQ work published as early as 1989 [1]}. The success of the ECVQ design, where minimum distortion was effectively subjected to entropy constraints, sets the theoretical basis of our entropy-constrained VQ design.
|
w
|
626002db-44e0-44a3-9805-75294a7e1513
|
Finally, we borrow an analytical term (with the corresponding formula) from the work that defined persistent entropy [1]}. Persistent entropy refers to the Shannon entropy of the filtration \(\mathcal {D}\) ; in other words, it is an entropy measure of the persistent homology itself. Persistent entropy is interpreted as the degree of topological order in the persistent homology, useful for discerning between a noisy diagram and a diagram containing structural topological features.
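For completeness, here is our rendering of that formula in the usual notation (the exact notation in [1]} may differ): for a persistence diagram with birth-death pairs \((b_i, d_i)\) and lifetimes \(\ell_i = d_i - b_i\),
\[
\mathrm{PE}(\mathcal{D}) = -\sum_i \frac{\ell_i}{L} \log \frac{\ell_i}{L}, \qquad L = \sum_i \ell_i \,.
\]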
|
w
|
e84d8bb1-99e8-455d-afcf-cb8039230f49
|
We train two VQ-VAE models for 27000 steps each, one with the HC-VQ term deactivated and one with the HC-VQ term activated (participating in the overall loss computation). The same data seed is reused to produce reproducible results from stochastic or pseudorandom functions.
|
m
|
e91475ad-88fa-45fd-9927-7f7ffd67499c
|
HC-VQ dry runs The following observations were made during training on the most complex dataset (CelebA). In \(\mathcal {S}^{\mathcal {D}}\) , \(\mu ^{0}\) (the mean \(\epsilon \) between persistent pairings of critical simplices in dimension 0) was most correlated with the entropy deltas (\(\Delta \) s) at each interval. \(\mu ^{0}\) increased steadily as training went on:
<FIGURE><FIGURE>
|
m
|
bda1900b-6ad1-4da5-9211-8c1c8517d187
|
Comparisons to the active HC-VQ term The following are CelebA training observations with the HC-VQ loss term active. The set of hyperparameters {\(T_{\beta } := 100 \) , \(T_{\mu } := 0.5\) } influenced the \(E_{z}\) entropy parameter rather drastically:
<FIGURE>
|
m
|
e81a7925-1e9b-4d4d-a5c3-9c534a56eec5
|
We found that the HC-VQ term was able to efficiently increase codebook utilisation (measured by \(E_{z}\) ) during the VQ-VAE training process compared to the dry runs. Heuristically, we observed sparse persistent homology and (stable) high persistent entropy, strengthening the assumption that the HC-VQ embedding retained more features compared to VQ-VAE training with the same codebook size.
|
d
|
8ab27edc-b9fb-470a-8baa-42cbb5f96371
|
The biggest drawback of HC-VQ is numerical stability, which can be mitigated by fitting an appropriate \(\lambda \) scaling hyperparameter and decreasing the hyperparameters \(T_{\beta }, T_{\mu }\) (at the cost of decreased learning speed).
|
d
|
eb249e64-11d2-46d7-b0f0-315b7a0e1f9f
|
[I]t may easily happen that other, perhaps in some sense simpler, lattices also have the properties that are required from \(L\) to complete the proof...
There are different reasons which may motivate the search for such a lattice: to make the proof deterministic; to improve the factor in the approximation result; to make the proof simpler.
Miklós Ajtai, [1]}
|
i
|
4bada40b-539b-424a-b2ef-6775dcd5a238
|
A lattice \(\mathcal{L}\) is the set of all integer linear combinations of some \(n\) linearly independent vectors \(\vec{b}_1, \ldots , \vec{b}_n \in \mathbb{R}^m\) . The matrix \(B = (\vec{b}_1, \ldots , \vec{b}_n)\) whose columns are these vectors is called a basis of \(\mathcal{L}\) , and \(n\) is called its rank.
Formally, the lattice \(\mathcal{L}\) generated by \(B\) is defined as
\(\mathcal{L} = \mathcal{L}(B) := \Big \lbrace \sum _{i=1}^n a_i \vec{b}_i : a_1, \ldots , a_n \in \mathbb{Z} \Big \rbrace \ \text{.}\)
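As a toy illustration of these definitions (not an algorithm used in practice), a shortest non-zero vector of a very low-rank lattice can be found by brute force over bounded integer coefficient vectors:

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(B, coeff_bound=5, p=2):
    """Toy brute-force search for a shortest non-zero vector of the
    lattice L(B) in the l_p norm, enumerating integer coefficients in
    [-coeff_bound, coeff_bound]. Only feasible for tiny ranks; real
    algorithms rely on basis reduction and enumeration or sieving."""
    n = B.shape[1]
    best_v, best_len = None, np.inf
    for a in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if not any(a):
            continue                      # skip the zero vector
        v = B @ np.array(a)
        length = np.linalg.norm(v, ord=p)
        if length < best_len:
            best_v, best_len = v, length
    return best_v, best_len

# Basis columns (1, 1) and (0, 2): a shortest non-zero vector is
# (1, 1), with Euclidean length sqrt(2).
B = np.array([[1, 0], [1, 2]])
print(shortest_vector_bruteforce(B))
```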
|
i
|
e631bd86-2ee0-48f4-b7e0-182888dfd2ef
|
Lattices are classically studied mathematical objects, and have proved invaluable in many computer science applications, especially the design and analysis of cryptosystems.
Indeed, the area of lattice-based cryptography, which designs cryptosystems whose security is based on the apparent intractability of certain computational problems on lattices, has flourished over the past quarter century. (See [1]} and its bibliography for a comprehensive summary and list of references.)
|
i
|
5d910a3e-5f70-40ae-814d-24a46626804e
|
The central computational problem on lattices is the Shortest Vector Problem (\(\mathrm{SVP}\) ): given a lattice basis \(B\) as input, the goal is to find a shortest non-zero vector in \(\mathcal{L}(B)\) .
This paper is concerned with its \(\gamma \) -approximate decision version in the \(\ell _p\) norm (\(\gamma \) -\(\mathrm{SVP}_p\) ), where \(p \ge 1\) is fixed and the approximation factor \(\gamma = \gamma (n) \ge 1\) is some function of the lattice rank \(n\) (often a constant).
Here the input additionally includes a distance threshold \(s > 0\) , and the goal is to determine whether the length (in the \(\ell _p\) norm) \(\lambda _1^{(p)}(\mathcal{L}) := \min _{\vec{v} \in \mathcal{L} \setminus \lbrace \vec{0}\rbrace } \Vert \vec{v}\Vert _p\) of the shortest non-zero vector in \(\mathcal{L}\) is at most \(s\) , or is strictly greater than \(\gamma s\) , when one of the two cases is promised to hold.
For the exact problem, where \(\gamma = 1\) , we often simply write \(\mathrm{SVP}_p\) .
|
i
|
544ab741-b911-426a-ab76-81867225ac18
|
Motivated especially by its central role in the security of lattice-based cryptography, understanding the complexity of \(\gamma \) -\(\mathrm{SVP}\) has been the subject of a long line of work.
In an early technical report, van Emde Boas [1]} initiated the study of the hardness of lattice problems more generally, and in particular showed that \(\mathrm{SVP}_{\infty }\) is \(\mathsf{NP}\) -hard.
Seventeen years later, Ajtai [2]} finally showed similar hardness for the important Euclidean case of \(p = 2\) ,
i.e., he showed that exact \(\mathrm{SVP}_2\) is \(\mathsf{NP}\) -hard, though under a randomized reduction.
Subsequent work [3]}, [4]}, [5]}, [6]}, [7]}, [8]} improved this by showing that \(\gamma \) -\(\mathrm{SVP}_p\) in any \(\ell _p\) norm is \(\mathsf{NP}\) -hard to approximate for any constant \(\gamma \ge 1\) , and hard for nearly polynomial factors \(\gamma = n^{\Omega (1/\log \log n)}\) assuming stronger complexity assumptions, also using randomized reductions.
Recent work [9]}, [10]} has also shown the fine-grained hardness of \(\gamma \) -\(\mathrm{SVP}_p\) for small constants \(\gamma \) (again under randomized reductions).
On the other hand, \(\gamma \) -\(\mathrm{SVP}_p\) for finite \(p \ge 2\) is unlikely to be \(\mathsf{NP}\) -hard for approximation factors \(\gamma \ge C_p \sqrt{n}\) (where \(C_p\) is a constant depending only on \(p\) ) [11]}, [12]}, [13]}, and the security of lattice-based cryptography relies on the conjectured hardness of \(\mathrm{SVP}\) or other problems for even larger (but typically polynomial) factors.
|
i
|
188cefde-e84e-4157-b1ee-4bec98bb37cd
|
While this line of work has been very successful in showing progressively stronger hardness of approximation and fine-grained hardness for \(\gamma \) -\(\mathrm{SVP}_p\) , it leaves some other important issues unresolved.
First, for \(p \ne \infty \) the hardness reductions and their analysis are rather complicated, and second, they are randomized.
Indeed, it is a notorious, long-standing open problem to prove that \(\mathrm{SVP}_p\) is \(\mathsf{NP}\) -hard, even in its exact form, under a deterministic reduction for some finite \(p\) .
While there have been some potential steps in this direction [1]}, [2]}, e.g., using plausible number-theoretic conjectures that appear very hard to prove, there has been no new progress on this front for a decade.
|
i
|
85fce6be-432c-4508-9d5c-646da6c8c5f4
|
Along with ongoing changes to regulatory approval processes for software [1]}, machine learning is being increasingly used within healthcare. Machine learning can be broadly divided into supervised learning, unsupervised learning, and reinforcement learning. Supervised learning requires a dataset where the outcome of interest is known, and results in a model which categorizes data points, e.g., as images of skin cancer or CT scans, by finding correlative or discriminative relationships in fully labelled data [2]}. For example, supervised learning has been applied to identify hospitalized patients who are at an increased risk of death using collected data of vital signs and patient outcomes [3]}. Clinical data can also be leveraged to uncover hidden patterns with unsupervised learning. Unlike supervised learning which is used to obtain a predictive model, unsupervised learning can be used to better understand patient data. For example, unsupervised learning was recently leveraged to identify whether there might be distinct clinical phenotypes of patients hospitalized with sepsis [4]}. While predictions made by a supervised learning model can be used to decide the next course of action, supervised learning does not find the optimal sequence of actions directly, since the effect of an action taken at a given time step is not independent of subsequent actions. Reinforcement learning therefore lends itself well to problems of sequential decision making where the effect of actions may extend over an unknown time duration into the future. Reinforcement learning, the focus of this article, attempts to make a sequence of decisions to achieve a given goal. Unlike supervised or unsupervised methods, it is used to identify the optimal sequence of actions based on their subsequent effect on the state (e.g., a patient’s current condition).
In this survey, we highlight the key challenges that arise when using RL to learn improved treatment strategies from medical records, and point the reader towards salient directions for future research.
|
i
|
e34a5815-7df2-4a0b-8d79-fef75e398745
|
While reinforcement learning provides a natural solution for learning improved policies for sequential tasks,
its application in healthcare introduces difficulties with regards to describing the state and action space, learning and evaluating policies from observational data, and designing reward functions. Since the chosen action and state representations affect the learned policy as well as its estimated value, off-policy evaluation may not be enough to reveal shortcomings of the learned policy. Additionally, when using observational data to learn a model of patient progression, one must be careful about distributional shifts incurred by a change in policy which might invalidate the model in under-explored areas of the state space. A potential area for future research is the development of standards or heuristics to validate and diagnose learned policies. Such methods should be centered around how to account for rare or unseen states, and understanding the effect of the chosen state and action representations. For instance, a continuous state representation requires a parameterized \(Q\) -function which is more susceptible to extrapolation errors; however, a discretized state space and a tabular \(Q\) -function may result in a loss of granularity. How should designers interpret differences in the learned policy under different design choices? What kinds of improvement should researchers reasonably expect given the limit of observed state-action pairs? Given the safety-critical nature of recommending treatment actions, validating learned policies will require more than general guidelines if RL is to become a widely adopted approach for personalized treatment.
|
d
|
1d6d69b5-94a5-4a4d-9f7d-701300fefa0a
|
Though deep convolutional neural networks (CNNs) are prevailing, they come at the cost of a huge computational burden and large power consumption, which poses a great challenge for real-time deployment on resource-limited devices such as cell phones and Internet-of-Things (IoT) devices. To address this problem, model compression has become an active research topic; it aims to reduce model redundancy while achieving comparable or even better performance than the full model, such that the compressed model can easily run on resource-limited devices.
|
i
|
24cc4ed6-946d-4605-8056-d0caae098e38
|
General methods for reducing the model size can be roughly categorized into the following groups: (1) Low-bit quantization compresses a pre-trained model by reducing the number of bits used to represent its weight parameters [1]}, [2]}, [3]}. (2) Compact networks, such as ShuffleNets [4]}, [5]}, MobileNets [6]}, [7]}, [8]} and GhostNet [9]}, directly design parameter-efficient neural network models. (3) Tensor factorization approximates the weight tensor with a series of low-rank matrices, which are then organized in a sum-product form [10]}, [11]}.
(4) Network pruning removes a certain part of the network. According to the pruning granularity, existing methods include weight pruning [12]}, [13]}, block pruning [14]}, [15]}, row/column pruning [16]}, [17]}, kernel pruning [18]}, [19]}, pattern pruning [20]}, [21]}, filter pruning [22]}, [23]}, etc.
|
i
|
3318d7ad-a5c2-4101-8f33-b4d5229efbc6
|
In this paper, we focus on filter pruning for efficient image classification, which has received ever-increasing attention due to the following advantages:
1) The pruned model is structured, and thus well supported by regular hardware and off-the-shelf basic linear algebra subprograms (BLAS) libraries.
2) The storage usage and computational cost are significantly reduced in online inference.
3) It can be further combined with other compression methods, such as network quantization, tensor factorization, and weight pruning, to achieve deeper compression and acceleration.
Despite the extensive progress [1]}, [2]}, [3]}, [4]}, [5]} made in the literature, two essential issues remain open problems in filter pruning: the pruned network structure and the filter importance measurement.
|
i
|
b46125cb-1c8c-4c46-889e-e5736df3f20e
|
The first issue, the pruned network structure, concerns the per-layer pruning rate. Setting these pruning rates for different layers has been shown to significantly affect the final performance [1]}, [2]}, [3]}. To this end, existing methods resort to a series of complex learning steps, many of which focus on training from scratch with additional sparsity constraints. For instance, the methods in [4]}, [5]} employ joint retraining with sparsity requirements on the scaling factors of batch normalization layers, and the pruning rate in each layer relies on a given threshold. Huang et al. [6]} proposed to train CNNs with a 0-1 mask on each filter, and the percentage of 1s in each layer makes up the pruned network structure. The method in [7]} takes previous activation responses as inputs and generates a binary index code for pruning. Similar to [6]}, the pruned network structure consists of the ratio of trained non-zero indexes. Dynamic pruning [9]} incorporates a feedback scheme to reactivate the pruned filters, thus achieving dynamic allocation of the sparsity in each layer. Another group of methods [10]}, [11]} requires human experts to designate the layer-wise pruning strategy, which is simple but quantitatively suboptimal. More recent works [2]}, [13]}, [14]}, [3]} focus on search-based strategies, typically through network architecture search [13]}, one-shot architecture search [14]}, or heuristic search algorithms such as the evolutionary algorithm [2]} and artificial bee colony [3]}. Although search-based methods generally result in a better network structure, their search process is extremely time-consuming.
|
i
|
aad34761-9752-4f65-95a7-48bd573ca3e9
|
The second issue, the filter importance measurement, identifies which filters in the pre-trained model should be preserved and inherited to initialize the pruned network structure. Existing works focus on measuring individual filter importance. To this end, many of them resort to preserving the most “important” filters according to a certain criterion, such as weight magnitude [1]}, the zero percentage of output activations [2]}, or the rank of the feature map [3]}. However, the methods in [2]}, [3]} are data-driven and add complexity in evaluation, and the method in [1]} is more effective in weight pruning [7]}, [8]} than in filter pruning, as demonstrated in [9]}. Besides, these methods usually require layer-wise fine-tuning to improve inference accuracy, which is also time-consuming. Training-from-scratch methods [1]}, [11]}, [12]}, [13]}, [14]} preserve the weights of non-zero-masked filters or filters with fewer sparse factors for the follow-up fine-tuning. The methods in [15]}, [16]}, [17]} adopt a random measurement, assigning filter weights from a random Gaussian distribution or randomly picking some of the pre-trained filter weights. Besides, the methods in [16]}, [19]} also require training a large auxiliary network to predict the weights of the potential pruned network structure, making the pruning more complex.
|
i
|
7f7cba90-8fd5-4248-bed6-643c35575bed
|
In this paper, we propose a novel pruning method, termed CLR-RNF, which consists of two components of CLR and RNF to respectively solve the above two problems. The former aims to efficiently find the optimal pruned network structure and the latter targets to select a subgroup of important filters to initialize the pruned network structure such that the pruned model performance can be effectively recovered.
To find the optimal pruned network structure, we adopt the effective magnitude-based criterion in weight pruning [1]}, [2]} and introduce a cross-layer ranking (CLR) of weights. As a result, the pruned network structure with our filter pruning scheme also benefits from the per-layer sparsity employed in weight pruning. For the first time, we reveal the “long-tail” pruning problem in the magnitude-based weight pruning as illustrated in Fig. REF , and propose a computation-aware measurement of weight importance to effectively address the inefficiency in network pruning caused by the long-tail.
To select a subgroup of important filters, instead of selecting filters by their individual importance, we measure the collective importance of a filter group, based on our insight that the filters in a layer act as a coalition to achieve the desired performance. Specifically, each filter in the pre-trained model recommends a group of its closest filters, which have a high potential to be inherited by the pruned model. Correspondingly, a k-reciprocal nearest filter (RNF) selection is proposed to pick the filters that fall into the intersection of the recommended groups as the final inherited filters. Both our pruned network structure and our filter selection are non-learning, which greatly simplifies the complexity of filter pruning.
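The following minimal NumPy sketch illustrates one plausible reading of the recommendation mechanism; the Euclidean distance between flattened filters and the vote-based tie-breaking are our assumptions, not the exact procedure of the method.

```python
# A hedged sketch of k-reciprocal nearest filter (RNF) selection.
import numpy as np

def rnf_select(filters: np.ndarray, k: int, n_keep: int) -> np.ndarray:
    """filters: (num_filters, in_channels, kh, kw). Each filter recommends its
    k nearest neighbours; the filters gathering the most reciprocal
    recommendations are inherited (collective, not individual, importance)."""
    f = filters.reshape(filters.shape[0], -1)
    dist = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # a filter never recommends itself
    knn = np.argsort(dist, axis=1)[:, :k]   # row i lists the filters i recommends
    votes = np.zeros(len(f), dtype=int)
    for i in range(len(f)):
        for j in knn[i]:
            if i in knn[j]:                 # j also recommends i: reciprocal pair
                votes[j] += 1
    return np.argsort(votes)[-n_keep:]      # keep the n_keep most-recommended filters

rng = np.random.default_rng(0)
kept = rnf_select(rng.standard_normal((16, 8, 3, 3)), k=5, n_keep=8)
```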
<FIGURE>
|
i
|
686e93fa-c050-4152-95ae-257e273abcc5
|
For the first time, we reveal the “long-tail” pruning problem in magnitude-based weight pruning, which degrades the efficacy of a pruned network, and propose a new computation-aware measurement to effectively address it.
We propose to treat the per-layer sparsity in the cross-layer ranking of weight pruning as the per-layer pruning rate for filter pruning. To the best of our knowledge, this is the first work that utilizes the linkage between filter pruning and weight pruning.
We propose a novel recommendation-based filter selection scheme based on the k-reciprocal nearest neighbors recommended by individual filters in a layer. The method selects a group of filters by taking into account the overall collective importance of the filter group, rather than the importance of individual filters.
|
i
|
53caf1b7-d632-46f6-8ddb-639d4b543930
|
The rest of this paper is organized as follows: In Sec. , we discuss the related work. Details of our proposed CLR-RNF are elaborated in Sec. . Sec. presents the experimental results. Finally, we conclude this paper in Sec. .
<FIGURE>
|
i
|
e1fcb5fd-c093-4c8d-94fa-a7020466f893
|
Weight Pruning. In contrast to filter pruning, weight pruning removes individual weights from the weight tensors of a neural network by a certain criterion or training technique, such as second-order Taylor expansion [1]}, the second-order derivative [2]}, \(\ell _2\) -regularization [3]}, global sparse momentum SGD [4]}, and the magnitude of the weight value [5]}, [6]}, [7]}, [8]}. After removing these weights, the weight tensors become highly sparse and memory can be reduced by storing the model in a sparse format; specialized hardware and software are then required to achieve practical speedups. Differently, we focus on filter pruning, but aim to make full use of the magnitude-based weight ranking to derive the pruned network structure.
|
w
|
fdb6b2f1-5205-4c77-9540-af4311aaa8cd
|
Neural Architecture Search. Recently, neural architecture search (NAS) has attracted increasing attention [1]}. It aims to design a network architecture in an automated way with as little human intervention as possible, typically through reinforcement learning [2]}, evolutionary learning [3]}, differentiable search [4]}, and so on. Similar to NAS, recent works resort to search-based strategies for the pruned network structure [5]}, [6]}. Differently, the search space of NAS is broad (operations, filter numbers, network depth, etc.) and is defined distinctively across different works. In contrast, filter pruning focuses on deciding the per-layer filter numbers to produce a subnet of a given network, which can be seen as a simplified version of architecture search.
|
w
|
ef358173-467e-43d7-9599-e597337248b4
|
We proposed a novel filter-level network pruning method, called CLR-RNF, involving two non-learning components, cross-layer ranking (CLR) and \(k\) -reciprocal nearest filter (RNF) selection, that find the optimal pruned network structure and locate a filter subset with better collective importance. To this end, we first revealed the “long-tail” pruning problem in magnitude-based weight pruning and proposed a cross-layer ranking strategy that removes the least important weights ranked across all layers. Furthermore, instead of considering individual filter importance like most previous works, we devised a recommendation-based filter selection to pick the filters with the best collective importance: each filter in the pre-trained model recommends a group of its closest filters as potential candidates, and the \(k\) -reciprocal nearest filters that fall into the intersection of the different recommendation sets are selected. Extensive experiments on CIFAR-10 and ImageNet demonstrate the efficiency and effectiveness of our new perspective on network pruning.
|
d
|
7b1692d3-c4dc-4312-b78b-ed6deb07109b
|
Dr. Lin served as Distinguished Lecturer of the IEEE Circuits and Systems Society from 2018 to 2019, a Steering Committee member of IEEE Transactions on Multimedia from 2014 to 2015, and the Chair of the Multimedia Systems and Applications Technical Committee of the IEEE Circuits and Systems Society from 2013 to 2015. His articles received the Best Paper Award of IEEE VCIP 2015 and the Young Investigator Award of VCIP 2005. He received the Outstanding Electrical Professor Award presented by the Chinese Institute of Electrical Engineering in 2019, and the Young Investigator Award presented by the Ministry of Science and Technology, Taiwan, in 2006. He is also the Chair of the Steering Committee of IEEE ICME. He has served as a Technical Program Co-Chair for IEEE ICME 2010, a General Co-Chair for IEEE VCIP 2018, and a Technical Program Co-Chair for IEEE ICIP 2019. He has served as an Associate Editor of IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Multimedia, IEEE Multimedia, and the Journal of Visual Communication and Image Representation.
|
d
|
6dfb4ea2-6d4c-419e-aa70-9b8be3d672ba
|
Deep learning has been a powerful tool for processing big data from many fields of science and technology [1]}.
It started with an important family of deep network architectures called deep convolutional neural networks
(DCNNs) which are very efficient for speech recognition, image classification, and many other practical tasks [2]}, [3]}.
Despite their great success in practice and some analysis of training algorithms such as stochastic gradient descent,
DCNNs are not yet fully understood in terms of their approximation, modelling, and generalization abilities.
Recently we confirmed the universality of DCNNs in [4]} and showed in [5]}
that DCNNs can represent functions at least as well as fully connected neural networks.
But it remains open in general whether they can perform better at learning and approximating classes of functions
with the special structures used in practical applications, though there have been some attempts in [6]}, [5]}, [8]}.
|
i
|
a3465882-d875-4053-a976-29e072dd4570
|
The first purpose of this paper is to answer the above open question by proving in Theorem REF below that
DCNNs followed by one fully connected layer can approximate radial functions \(f(|x|^2)\) much faster than fully connected shallow neural networks
where \(|x| = \sqrt{x_1^2 + \ldots + x_d^2}\) is the norm of an input vector \(x=(x_1, \ldots , x_d) \in \mathbb {R}^d\) .
In fact, we present dimension-independent rates of approximating radial functions in Theorem REF below.
Moreover, we develop a theory of DCNNs for approximating efficiently functions of the form
\(f \circ Q (x) =f(Q(x))\) with a polynomial \(Q\) on \(\mathbb {R}^d\) and a univariate function \(f\) , both unknown.
Radial functions have such a form with a known quadratic polynomial \(Q(x) = x_1^2 + \ldots + x_d^2\) . They
arise naturally in statistical physics, early warning of earthquakes, 3-D point-cloud segmentation, and image rendering,
and their learning by fully connected neural networks was studied in [1]}, [2]}, [3]}.
|
i
|
d72739ad-6b81-4372-84ea-3738a886630b
|
The second purpose of this paper is to conduct generalization analysis of a learning algorithm for regression induced by DCNNs
and to show for regression functions of the form \(f \circ Q\) that the rates of estimation error (which equals the excess generalization error)
decrease to some optimal value and then increase as the depth of the deep network becomes large.
This is consistent with observations made in many practical applications of deep neural networks.
|
i
|
3f8de2f5-097a-410e-b27c-7691b1282357
|
Our last purpose is to show that DCNNs with our network structure, determined completely by two parameters,
can automatically extract features and exploit the composite nature of the target function in learning for regression via tuning the values of the two parameters, even though our network structure is generic and does not use any composite information or the functions \(Q\) and \(f\) . The activation function for our networks is the rectified linear unit (ReLU) \(\sigma \) given by
\(\sigma (u) = \max \lbrace u, 0\rbrace \) for \(u\in \mathbb {R}\) .
|
i
|
9742f751-5d8c-4cdf-8027-d96ee1923dd7
|
A classical multi-layer fully connected neural network \(\lbrace h^{(j)}(x)\rbrace _{j=0}^J\) of widths \(\lbrace d_j \in \mathbb {N}\rbrace \) takes an iterative form with \(h^{(0)}(x)=x\in \mathbb {R}^d\) and
\(d_0 =d\) given by
\(h^{(j)}(x)=\sigma \left(F^{(j)}h^{(j-1)}(x)-b^{(j)}\right), \qquad j=1,\dots ,J,\)
|
i
|
b4651b76-f0d4-41bc-bf63-15b2d767b979
|
where \(b^{(j)}\in \mathbb {R}^{d_j}\) and \(F^{(j)}\) is a \(d_j \times d_{j-1}\) full connection matrix reflecting the fully connected nature.
The number \(d_j d_{j-1}\) of free parameters in the connection matrix \(F^{(j)}\) becomes very large as the input dimension \(d\) increases.
A core idea of deep learning is to reduce the number of free parameters at individual layers and channels by imposing special structures on the connection matrices.
The special structure imposed on DCNNs is induced by convolutions. The 1-D convolution of a sequence \(w=(w_k)_{k\in \mathbb {Z}}\) on \(\mathbb {Z}\) supported in
\(\lbrace 0, 1, \ldots , s\rbrace \) and another \(x=(x_k)_{k\in \mathbb {Z}}\) supported in \(\lbrace 1, 2, \ldots , D\rbrace \) is given by
\( \left(w{*} x\right)_i = \sum _{k\in \mathbb {Z}} w_{i-k} x_k = \sum _{k=1}^D w_{i-k} x_k, \qquad i\in \mathbb {Z}. \)
|
i
|
c0b24e00-ece7-4712-8a89-769ad66d6644
|
This is a sequence supported in \(\lbrace 1, 2, \ldots , D+s\rbrace \) . By restricting the index \(i\) onto this set, we know that
the possibly nonzero entries of the convoluted sequence \(w{*} x\) can be expressed in a vector form as
\(\left[\begin{array}{c}\left(w{*} x\right)_1 \\\left(w{*} x\right)_2 \\\vdots \\\left(w{*} x\right)_{D} \\\vdots \\\left(w{*} x\right)_{D+s}\end{array}\right]=T^{w} \left[\begin{array}{c}x_1 \\x_2 \\\vdots \\x_{D}\end{array}\right], \quad T^{w}:=\left[\begin{array}{ccccccc}w_0 & 0 &0&0&\dots &0&0\\w_1 &w_0 &0&0&\dots &0&0\\\vdots &\vdots &\ddots &\ddots &\ddots &\vdots &\vdots \\w_s &w_{s-1} &\dots &w_0 &\dots &0&0\\0&w_s &\dots &w_1 &\ddots &\vdots &0\\\vdots &\ddots &\ddots &\ddots &\ddots &\ddots &\vdots \\\dots &\dots &0&w_s &\dots &w_1 &w_0 \\\dots &\dots &\dots &0&w_s &\dots &w_1 \\\vdots &\dots &\dots &\ddots &\ddots &\ddots &\vdots \\0&\dots &\dots &\dots &\dots &0&w_s\end{array}\right].\)
|
i
|
40d1db20-ce80-4809-9adb-68be8db97c4a
|
Here the Toeplitz type matrix \(T^{w}\) is induced by the 1-D convolution and is called a convolutional matrix.
The number of parameters \(\lbrace w_k\rbrace _{k=0}^s\) contained in this structured connection matrix is \(s+1\) , much smaller than
the number of entries \(D(D+s)\) of a full connection matrix of the same size. This great reduction at individual layers allows DCNNs to have large depths.
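As a sanity check of this construction, here is a minimal NumPy sketch (0-based indexing; the sizes are toy values) that builds \(T^{w}\) and verifies it against the direct convolution \(\left(w{*} x\right)_i = \sum _{k} w_{i-k} x_k\) :

```python
# A minimal sketch: the (D+s) x D Toeplitz convolutional matrix of a filter
# supported on {0, ..., s}, checked against the convolution sum directly.
import numpy as np

def toeplitz_conv_matrix(w: np.ndarray, D: int) -> np.ndarray:
    """w holds w_0, ..., w_s; column j of T^w carries them shifted down by j."""
    s = len(w) - 1
    T = np.zeros((D + s, D))
    for j in range(D):
        T[j:j + s + 1, j] = w
    return T

rng = np.random.default_rng(0)
s, D = 3, 10
w, x = rng.standard_normal(s + 1), rng.standard_normal(D)
direct = np.array([sum(w[i - k] * x[k] for k in range(D) if 0 <= i - k <= s)
                   for i in range(D + s)])
assert np.allclose(toeplitz_conv_matrix(w, D) @ x, direct)
# T^w stores only s+1 free parameters, versus D*(D+s) for a full matrix.
```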
In this paper, we construct a deep neural network consisting of a group of \(J_1 \in \mathbb {Z}_+\) convolutional layers followed by a downsampling operation, and another group of \(J_2 - J_1\) convolutional layers followed by a fully connected layer. The depth \(J_2\) of the DCNNs and
the width of the last fully connected layer depend on an integer parameter \(N \in \mathbb {N}\) explicitly.
For \(u \ge 0\) , we use \(\lfloor u\rfloor \) to denote the integer part of \(u\) ,
and \(\lceil u\rceil \) the smallest integer greater than or equal to \(u\) .
|
i
|
6246ba85-0d01-42a0-a6dd-ea4a8596d27d
|
Definition 1
Let \(x=(x_1, \ldots , x_d)\in \mathbb {R}^d\) be the input data vector, \(s\in \mathbb {N}\) be the filter length, and \(J_1 \in \mathbb {Z}_+, N \in \mathbb {N}\) . The DCNN \(\lbrace h^{(j)}: \mathbb {R}^d\rightarrow \mathbb {R}^{d_j}\rbrace _{j=1}^{J_2}\) with widths \(\lbrace d_j\rbrace _{j=1}^{J_2}\) given by \(d_0 =d\) ,
\(d_{J_1} = \left\lfloor \frac{d+ J_1 s}{d} \right\rfloor \) and the iteration relation
\( d_j = d_{j-1} + s, \qquad j\in \lbrace 1, \ldots , J_2\rbrace \setminus \lbrace J_1\rbrace \)
|
i
|
d80fa40c-7f9f-4349-bda7-198da10154bb
|
has depth \(J_2 := J_1 + \left\lceil \frac{(2N +3) d_{J_{1}}}{s-1}\right\rceil \) and is defined iteratively by \(h^{(0)}(x)=x\) and
\(h^{(j)}(x)=\left\lbrace \begin{array}{ll}\sigma \left(T^{(j)}h^{(j-1)}(x)-b^{(j)}\right), & \hbox{if} \ j\in \lbrace 1, \dots , J_2\rbrace \setminus \lbrace J_1\rbrace , \\\mathfrak {D}_d \left(\sigma \left(T^{(j)}h^{(j-1)}(x)-b^{(j)}\right)\right), & \hbox{if} \ j=J_1,\end{array}\right.\)
|
i
|
41d7a9d7-d68b-4560-97b7-882f89cf3412
|
where \(\lbrace T^{(j)}:=T^{w^{(j)}}\rbrace \) are the convolutional matrices induced by the sequence of filters \({\bf w} :=\lbrace w^{(j)}\rbrace _{j=1}^{J_2}\) each supported in \(\lbrace 0,1,\dots ,s\rbrace \) , \(\mathfrak {D}_d: \mathbb {R}^{d+ J_1 s} \rightarrow \mathbb {R}^{\lfloor \frac{d+ J_1 s}{d} \rfloor }\) is the downsampling operator acting at the \(J_1\) -th layer given by
\( \mathfrak {D}_d (v) = \left(v_{id}\right)_{i=1}^{\lfloor \frac{d+ J_1 s}{d}\rfloor }, \qquad v=\left(v_{i}\right)_{i=1}^{d +J_1 s} \in \mathbb {R}^{d+ J_1 s}, \)
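Putting the pieces together, the following is a minimal NumPy sketch of the forward pass in this definition; the random filters, zero biases, and toy sizes are illustrative assumptions, and the final fully connected layer is omitted for brevity.

```python
# A hedged sketch of the DCNN iteration: J1 convolutional layers, downsampling
# by d at layer J1, then further convolutional layers; widths grow by s per layer.
import numpy as np

def conv_matrix(w, D):
    """The (D+s) x D Toeplitz matrix T^w of a filter supported on {0, ..., s}."""
    s = len(w) - 1
    T = np.zeros((D + s, D))
    for j in range(D):
        T[j:j + s + 1, j] = w
    return T

def dcnn_forward(x, filters, J1, d):
    """filters: filter vectors w^{(1)}, ..., w^{(J2)}, each of length s+1."""
    h = x
    for j, w in enumerate(filters, start=1):
        h = np.maximum(conv_matrix(w, len(h)) @ h, 0.0)  # ReLU(T^{(j)} h - 0)
        if j == J1:
            h = h[d - 1::d]  # downsampling D_d keeps v_d, v_{2d}, ...
    return h

rng = np.random.default_rng(0)
d, s, J1 = 8, 3, 4
filters = [rng.standard_normal(s + 1) for _ in range(J1 + 2)]  # toy J2 = J1 + 2
out = dcnn_forward(rng.standard_normal(d), filters, J1, d)
# After layer J1 the width is floor((d + J1*s)/d) = 2, matching the definition.
```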
|
i
|
9ef55353-5428-4fd8-b143-cda722f587c2
|
The last layer \(h^{(J_2 +1)}: \mathbb {R}^d\rightarrow \mathbb {R}^{2N +3}\) is produced with a connection matrix \(F^{[J_2 +1]} \in \mathbb {R}^{(2N +3) \times d_{J_2}}\) of identical rows and a bias vector \(b^{(J_2 +1)} \in \mathbb {R}^{2N +3}\) as
\( h^{(J_2 +1)} (x)=\sigma \left(F^{[J_2 +1]}h^{(J_2)}(x)-b^{(J_2 +1)}\right). \)
|
i
|
aa250dfa-96d0-4765-a499-297e79cd94f6
|
The hypothesis space \({\mathcal {H}}_N\) for learning and approximation consists of all output functions depending on \({\bf w}\) , \(F^{[J_2 +1]}\) , and the bias sequence \({\bf b} =\lbrace b^{(j)}\rbrace _{j=1}^{J_2 +1}\) as
\({\mathcal {H}}_N =\left\lbrace c\cdot h^{(J_2 +1)} (x): c\in \mathbb {R}^{2N +3}, \ {\bf w}, \ {\bf b}, \ F^{[J_2 +1]}\right\rbrace .\)
|
i
|
5a80999d-4c3b-4015-9404-51ee2e5799dd
|
The restriction (REF ) on the bias vectors \(\lbrace b^{(j)}\rbrace _{j=1}^{J_2 -1}\) is imposed based on the observation that the sums of the rows in the middle of the convolutional matrix \(T^{w}\) in (REF )
are equal to \(\sum _{k=0}^s w_k\) .
|
i
|
3bfd7f44-7bc8-40f5-a228-589c6a9e32d6
|
Writing \(\mathcal {A}_{F, b}\) for the layer map \(v \mapsto \sigma \left(F v - b\right)\) , the last layer \(h^{(J_2 +1)} (x)\) of our network can be expressed as
\(\mathcal {A}_{F^{[J_2 +1]}, b^{(J_2 +1)}} \circ \mathcal {A}_{T^{(J_2)}, b^{(J_2)}} \circ \ldots \circ \mathcal {A}_{T^{(J_1 +1)}, b^{(J_1 +1)}} \circ \mathfrak {D}_d \circ \mathcal {A}_{T^{(J_1)}, b^{(J_1)}}\circ \ldots \circ \mathcal {A}_{T^{(1)}, b^{(1)}}(x).\)
|
i
|
1aff4d7d-487e-4d43-832f-d9dc0086325b
|
The structure of the deep neural network in Definition REF is completely determined by the two parameters \(J_1, N\) which are called structural parameters. This network structure does not involve any feature or composite information of the target functions. Once the structural parameters are chosen, the other parameters in (REF ), \({\bf w}, {\bf b}, F^{[J_2 +1]}\) and \(c\) can be trained with
stochastic configuration networks [1]}, stochastic gradient descent, or some other randomized methods, and are called training parameters.
|
i
|
c1c3d50a-75aa-4ced-bca6-6067cafa8e7c
|
Traditional machine learning algorithms are often implemented in two steps of feature extraction and task-oriented
learning. In many practical applications, the first step of feature extraction is carried out with carefully designed preprocessing pipelines and data transformations and is labor intensive, involving feature engineering techniques,
human ingenuity and practical domain knowledge.
|
i
|
fd7918a7-d1d3-4ba5-802d-f768bf1acb0d
|
It has been believed, from the great success of deep learning in practical applications, that the structures imposed on deep neural networks enable deep learning algorithms to automatically combine the two steps of extracting features and producing satisfactory outputs for the desired learning tasks. We aim at verifying this belief for the convolutional structure imposed on CNNs in learning composite functions of the form \(f \circ Q\) . On one hand, the structure of our CNN network stated in Definition REF does not depend on the composite information of \(f \circ Q\) or the functions \(f, Q\) ; it is generic and determined only by the two parameters \(J_1, N\) . On the other hand, if the target function takes the form \(f \circ Q\) , the convolutions enable our network to automatically extract the polynomial feature \(Q\) and then learn the composite target function well, with tuned parameters \(J_1, N\) of our unified DCNN model. We expect that our CNN network can extract some other nonlinear features and learn functions efficiently via tuning the two structural parameters.
|
i
|
5a50d8ef-ef45-4200-ae2b-0b3402695137
|
To analyze the learning ability of the algorithm induced by our network, we use two novel ideas in estimating the approximation error and the sample error. In our previous work [1]}, [2]}, we have shown how to realize linear features \(\lbrace \xi _k \cdot x\rbrace \) by a group of convolutional layers. In this paper we demonstrate how another group of convolutional layers together with a fully connected layer can be used to approximate ridge monomials \(\lbrace (\xi _k \cdot x)^\ell : 1\le k \le n_q, 1\le \ell \le q\rbrace \) and then the polynomial \(Q\) . Here \(n_q =\binom{d-1+q}{q}\) is
the dimension of the space of homogeneous polynomials on \(\mathbb {R}^d\) of degree \(q\) , and \(J_1 = \lceil \frac{n_q d-1}{s-1}\rceil \) is the number of convolutional layers in the first group.
Applying convolutional layers to extract nonlinear (polynomial) features is the first novelty of this paper.
The second novelty is to bound the training parameters \(c, {\bf w}, {\bf b}, F^{[J_2 +1]}\) in the expression (REF ) of the approximator in the approximation error part, so that a bounded subset \({\mathcal {H}}_{R, N}\) of \({\mathcal {H}}_N\) (defined in (REF ) below) contains the approximator and the covering numbers of the bounded hypothesis space \({\mathcal {H}}_{R, N}\) can be estimated for bounding the sample error. This is achieved by applying Cauchy's bound on polynomial roots and Vieta's formulas for
polynomial coefficients to bound the filters constructed in convolutional factorizations of sequences.
|
i
|
d50d7320-f444-4f12-81ef-c24bfb198d9f
|
In this section we state our main results, which will be proved in Sections , , and REF . The approximation theorems given in the first two subsections show that if the target function has the composite form \(f \circ Q\) with a Lipschitz-\(\alpha \) function \(f\) , then the CNN network in Definition REF
achieves an approximation accuracy \(\epsilon >0\) when the structural parameter \(N\) is of order \(O\left(\epsilon ^{-\frac{1}{\alpha }}\right)\) , the same order as for approximating univariate functions by neural networks. The learning rates stated in our last main result, realized by the learning algorithm induced by our network for regression, are of dimension-independent order \(O\left(m^{-\frac{\alpha }{1+\alpha }}\right)\) for a sample of size \(m\) . These results tell us that the generic CNN network in Definition REF can automatically extract the polynomial feature and exploit the composite nature of the target function via tuning the structural parameters \(N, J_1\) in the learning process, even though the network does not involve any information about the polynomial or the composition.
|
r
|
e51cb1ba-3cf8-4392-8e61-9a62de7cddcb
|
In [1]}, A. Arageorgis, building on the work of [2]}, constructs a formal framework describing scientific theorising in the presence of many variants of relativism: truth, meaning, logical framework, and evidence are expected to depend on the actions, conjectures, and conceptual choices of each scientist (or research programme) at any given time. Arageorgis proves that, if two scientists (or research programmes) start with common background knowledge and work in a way that asymptotically brings each of them close to their respective truth about the world, then there will asymptotically be a (trivial) translation between their resulting theories.
|
i
|
893afce5-87ad-4996-a89e-7299eadb0c79
|
It has been argued [1]} that software development has an empirical character, albeit with some important differences compared to the empirical character of natural science. Moreover, computer systems can be considered as technical artefacts [2]}; in that sense, they require the use of some background scientific knowledge in order to be built [3]}. Both these considerations lead us to believe that the framework defined in [4]} can be (carefully) modified to reason fruitfully about program development; in particular, given its translatability result and, more generally, its strength in modelling situations where multiple agents work independently but share some background assumptions, we feel that it can shed some light on similar concerns in computer science.
|
i
|
6f6a836a-9d78-470f-a93c-15f2bd77f15a
|
With the above in mind, we extend the framework of [1]} to describe software development. We reinterpret scientists (or research programmes) as software developers (or teams). The notions of relativised truth, meaning, logical framework, and evidence remain present in our generalisation. We add that each developer (or team) produces a piece of code (a technical artefact) at every time instant; hence, the relativised setting is extended with program semantics, allowing different programmers to work in different programming languages, or even programming paradigms, resulting in different program semantics.
|
i
|
c6434683-6d3e-4477-a0c9-6863f374d096
|
It is then possible to define a notion of translatability between the outputs of program generators.
As in [1]}, we obtain a proof that if two programmers (or teams) start with a common specification and work in a way that asymptotically makes each one write correct programs relative to their shared specification, then there will asymptotically be a (trivial) translation between their resulting programs and theories.
|
i
|
c6333273-1bb4-403c-a392-0928ef6dd98d
|
The rest of the paper is organised as follows: Section contains philosophical discussion on the ontology of programs and specifications and on the methodology of software development; the purpose of this discussion is threefold: (a) to argue for the feasibility of porting results from the philosophy of natural science to reason about computer science, (b) to support the conceptual choices made while adapting [1]} and developing our framework, and (c) to provide an overview of the aspects of the development of computational systems, especially in relation to programs and specifications, from which the aspects that can or cannot be handled by our framework can be inferred.
Section contains a detailed presentation of our framework.
Finally, Section contains some concluding remarks.
|
i
|
b01a8d40-15e1-4adf-bdef-7e7dec529617
|
Similar to [1]}, the central elements of the framework on its technical level (corresponding to the central elements of specifications and software developers on the conceptual level) are possible worlds and program generators, and we study their interaction. At each time instance, a program generator processes the evidence up to the given time and produces a program, a hypothesis, and an action. The possible world responds with a truth assignment, program semantics, and some new evidence; the new evidence can be appended to the existing evidence and fed into the program generator at the next time instance, continuing the interaction.
|
d
|
c175610e-ade5-42a0-af4c-0928e48fae7e
|
Using the elements above, we have defined an abstract notion of translatability between the outputs of programmers working in different settings. As a token of the importance of such a notion, we have shown, similarly to [1]}, that two program generators starting with a common specification (i.e., set of possible worlds) and writing correct programs relative to that specification will end up with programs one of which can be (trivially) translated into the other. Specific instantiations of the framework can refine the definition of translatability we have provided and thus, by adapting our proof accordingly, arrive at translations with more specific structure.
|
d
|
1b5ba070-8f21-4946-bf99-84e78f40b7bf
|
The fact that we could adapt a framework targeted at describing natural science and end up with a framework that describes software development and still highly resembles its origin hints at the similarities between the two endeavours. However, notice that, in addition to introducing new elements to account for the technical artefacts produced by programmers, we had to reinterpret some terms of the original framework in order to adapt it for our purposes; most notably, background knowledge was reinterpreted as specification. This hints at (one of) the key differences between the two endeavours.
|
d
|
761c4acc-826e-437b-99a3-cbd6339f28b4
|
Of course, as already stated in Section , our framework does not handle all the aspects of software development, let alone of the development of full computational systems. We have commented on a few of the ways that it might be extended to accommodate more such aspects. In addition, the reasoning on logics of program generators might benefit if it is described via the theory of institutions, an abstraction of model theory based on category theory; [1]} has already attempted such a modification of the original framework of [2]}. Moreover, other kinds of mappings between the outputs of program generators could be considered instead of translations, such as conceptual blending [3]}; this might model the creative process of merging interesting ideas from one program to the other while the programs are still expressed in different formalisms.
|
d
|