Input Fast-Forwarding for Better Deep Learning. Ibrahim, A., Abbott, A. L., & Hussein, M. E. In Karray, F., Campilho, A., & Cheriet, F., editors, Image Analysis and Recognition, Lecture Notes in Computer Science, pages 363–370. Springer International Publishing, 2017.

Abstract: This paper introduces a new architectural framework, known as input fast-forwarding, that can enhance the performance of deep networks. The main idea is to incorporate a parallel path that sends representations of input values forward to deeper network layers. This scheme is substantially different from “deep supervision”, in which the loss layer is re-introduced to earlier layers. The parallel path provided by fast-forwarding enhances the training process in two ways. First, it enables the individual layers to combine higher-level information (from the standard processing path) with lower-level information (from the fast-forward path). Second, this new architecture reduces the problem of vanishing gradients substantially because the fast-forwarding path provides a shorter route for gradient backpropagation. In order to evaluate the utility of the proposed technique, a Fast-Forward Network (FFNet), with 20 convolutional layers along with parallel fast-forward paths, has been created and tested. The paper presents empirical results that demonstrate improved learning capacity of FFNet due to fast-forwarding, as compared to GoogLeNet (with deep supervision) and CaffeNet, which are 4× and 18× larger in size, respectively. All of the source code and deep learning models described in this paper will be made available to the entire research community (https://github.com/aicentral/FFNet).
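To make the mechanism concrete, here is a minimal sketch of one fast-forward block in PyTorch. This illustrates only the general idea from the abstract: the class name FastForwardBlock, the channel counts, the 1×1 convolution on the parallel path, and the choice of concatenation as the combination operator are all assumptions, not the paper's exact FFNet configuration, which is available in the repository linked above.

import torch
import torch.nn as nn

class FastForwardBlock(nn.Module):
    """Standard convolutional path plus a parallel 'fast-forward' path that
    carries a shallow representation of the block input to the output.
    (Illustrative sketch; not the paper's exact FFNet architecture.)"""
    def __init__(self, in_ch: int, out_ch: int, ff_ch: int):
        super().__init__()
        # Standard (deep) processing path: two 3x3 conv layers.
        self.deep = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fast-forward path: a cheap 1x1 conv that preserves lower-level
        # information and gives gradients a shorter route back to the input.
        self.fast = nn.Conv2d(in_ch, ff_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Deeper layers see higher-level features (deep path) concatenated
        # with lower-level features (fast-forward path).
        return torch.cat([self.deep(x), self.fast(x)], dim=1)

# Usage: an RGB input yields 32 deep channels + 8 fast-forwarded channels.
block = FastForwardBlock(in_ch=3, out_ch=32, ff_ch=8)
y = block(torch.randn(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 40, 64, 64])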
@inproceedings{ibrahim_input_2017,
title = {Input Fast-Forwarding for Better Deep Learning},
rights = {All rights reserved},
isbn = {978-3-319-59876-5},
series = {Lecture Notes in Computer Science},
abstract = {This paper introduces a new architectural framework, known as input fast-forwarding, that can enhance the performance of deep networks. The main idea is to incorporate a parallel path that sends representations of input values forward to deeper network layers. This scheme is substantially different from “deep supervision”, in which the loss layer is re-introduced to earlier layers. The parallel path provided by fast-forwarding enhances the training process in two ways. First, it enables the individual layers to combine higher-level information (from the standard processing path) with lower-level information (from the fast-forward path). Second, this new architecture reduces the problem of vanishing gradients substantially because the fast-forwarding path provides a shorter route for gradient backpropagation. In order to evaluate the utility of the proposed technique, a Fast-Forward Network ({FFNet}), with 20 convolutional layers along with parallel fast-forward paths, has been created and tested. The paper presents empirical results that demonstrate improved learning capacity of {FFNet} due to fast-forwarding, as compared to {GoogLeNet} (with deep supervision) and {CaffeNet}, which are $4\times$ and $18\times$ larger in size, respectively. All of the source code and deep learning models described in this paper will be made available to the entire research community (https://github.com/aicentral/FFNet).},
pages = {363--370},
booktitle = {Image Analysis and Recognition},
publisher = {Springer International Publishing},
author = {Ibrahim, Ahmed and Abbott, A. Lynn and Hussein, Mohamed E.},
editor = {Karray, Fakhri and Campilho, Aurélio and Cheriet, Farida},
year = {2017},
langid = {english},
keywords = {Deep Branch, Deep Learning, Early Layer, Parallel Path, Text Detection}
}
{"_id":"uZqG2hid3hspoiM2X","bibbaseid":"ibrahim-abbott-hussein-inputfastforwardingforbetterdeeplearning-2017","author_short":["Ibrahim, A.","Abbott, A. L.","Hussein, M. E."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"Input Fast-Forwarding for Better Deep Learning","rights":"All rights reserved","isbn":"978-3-319-59876-5","series":"Lecture Notes in Computer Science","abstract":"This paper introduces a new architectural framework, known as input fast-forwarding, that can enhance the performance of deep networks. The main idea is to incorporate a parallel path that sends representations of input values forward to deeper network layers. This scheme is substantially different from “deep supervision”, in which the loss layer is re-introduced to earlier layers. The parallel path provided by fast-forwarding enhances the training process in two ways. First, it enables the individual layers to combine higher-level information (from the standard processing path) with lower-level information (from the fast-forward path). Second, this new architecture reduces the problem of vanishing gradients substantially because the fast-forwarding path provides a shorter route for gradient backpropagation. In order to evaluate the utility of the proposed technique, a Fast-Forward Network (FFNet), with 20 convolutional layers along with parallel fast-forward paths, has been created and tested. The paper presents empirical results that demonstrate improved learning capacity of FFNet due to fast-forwarding, as compared to GoogLeNet (with deep supervision) and CaffeNet, which are 4×4×4\\\\times \\ and 18×18×18\\\\times \\ larger in size, respectively. All of the source code and deep learning models described in this paper will be made available to the entire research community (https://github.com/aicentral/FFNet).","pages":"363–370","booktitle":"Image Analysis and Recognition","publisher":"Springer International Publishing","author":[{"propositions":[],"lastnames":["Ibrahim"],"firstnames":["Ahmed"],"suffixes":[]},{"propositions":[],"lastnames":["Abbott"],"firstnames":["A.","Lynn"],"suffixes":[]},{"propositions":[],"lastnames":["Hussein"],"firstnames":["Mohamed","E."],"suffixes":[]}],"editor":[{"propositions":[],"lastnames":["Karray"],"firstnames":["Fakhri"],"suffixes":[]},{"propositions":[],"lastnames":["Campilho"],"firstnames":["Aurélio"],"suffixes":[]},{"propositions":[],"lastnames":["Cheriet"],"firstnames":["Farida"],"suffixes":[]}],"year":"2017","langid":"english","keywords":"Deep Branch, Deep Learning, Early Layer, Parallel Path, Text Detection","bibtex":"@inproceedings{ibrahim_input_2017,\n\ttitle = {Input Fast-Forwarding for Better Deep Learning},\n\trights = {All rights reserved},\n\tisbn = {978-3-319-59876-5},\n\tseries = {Lecture Notes in Computer Science},\n\tabstract = {This paper introduces a new architectural framework, known as input fast-forwarding, that can enhance the performance of deep networks. The main idea is to incorporate a parallel path that sends representations of input values forward to deeper network layers. This scheme is substantially different from “deep supervision”, in which the loss layer is re-introduced to earlier layers. The parallel path provided by fast-forwarding enhances the training process in two ways. First, it enables the individual layers to combine higher-level information (from the standard processing path) with lower-level information (from the fast-forward path). 
Second, this new architecture reduces the problem of vanishing gradients substantially because the fast-forwarding path provides a shorter route for gradient backpropagation. In order to evaluate the utility of the proposed technique, a Fast-Forward Network ({FFNet}), with 20 convolutional layers along with parallel fast-forward paths, has been created and tested. The paper presents empirical results that demonstrate improved learning capacity of {FFNet} due to fast-forwarding, as compared to {GoogLeNet} (with deep supervision) and {CaffeNet}, which are 4×4×4\\{{\\textbackslash}times \\} and 18×18×18\\{{\\textbackslash}times \\} larger in size, respectively. All of the source code and deep learning models described in this paper will be made available to the entire research community (https://github.com/aicentral/{FFNet}).},\n\tpages = {363--370},\n\tbooktitle = {Image Analysis and Recognition},\n\tpublisher = {Springer International Publishing},\n\tauthor = {Ibrahim, Ahmed and Abbott, A. Lynn and Hussein, Mohamed E.},\n\teditor = {Karray, Fakhri and Campilho, Aurélio and Cheriet, Farida},\n\tyear = {2017},\n\tlangid = {english},\n\tkeywords = {Deep Branch, Deep Learning, Early Layer, Parallel Path, Text Detection}\n}\n\n","author_short":["Ibrahim, A.","Abbott, A. L.","Hussein, M. E."],"editor_short":["Karray, F.","Campilho, A.","Cheriet, F."],"bibbaseid":"ibrahim-abbott-hussein-inputfastforwardingforbetterdeeplearning-2017","role":"author","urls":{},"keyword":["Deep Branch","Deep Learning","Early Layer","Parallel Path","Text Detection"],"metadata":{"authorlinks":{}}},"bibtype":"inproceedings","biburl":"https://bibbase.org/f/2bJzYjCLapWTtM86s/mehussein-2023.bib","dataSources":["kYvtZ54PgkXqjbteW","dWqYiMkhjrrw3PpB5","jrmreZGeQB4kJdEkY","mhdykGczo2jDicE3X","havAjNnaG4BxhYWyb"],"keywords":["deep branch","deep learning","early layer","parallel path","text detection"],"search_terms":["input","fast","forwarding","better","deep","learning","ibrahim","abbott","hussein"],"title":"Input Fast-Forwarding for Better Deep Learning","year":2017}