Multi-Scale Context Aggregation by Dilated Convolutions. Yu, F. & Koltun, V. ArXiv e-prints, November 2015.
@article{Yu:2015uc,
author = {Yu, Fisher and Koltun, Vladlen},
title = {{Multi-Scale Context Aggregation by Dilated Convolutions}},
journal = {ArXiv e-prints},
year = {2015},
volume = {cs.CV},
month = nov,
annote = {A simple pipeline beats more complex ones, surpassing all previous results.

Math of dilated conv.

You can find it at <http://www.inference.vc/dilated-convolutions-and-kronecker-factorisation/>.

Or you can simply check Eqs. (1) and (2) in the paper; I reconstruct them below.

Here I believe they assume a standard stride-1 convolution.
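
From memory, the two definitions read roughly as follows (a paraphrase, not a verbatim copy; check the PDF for the exact notation):

```latex
% Eq. (1): ordinary discrete convolution of a feature map F with a kernel k
(F * k)(\mathbf{p}) = \sum_{\mathbf{s} + \mathbf{t} = \mathbf{p}} F(\mathbf{s})\, k(\mathbf{t})

% Eq. (2): l-dilated convolution -- the kernel taps are spaced l apart
(F *_{l} k)(\mathbf{p}) = \sum_{\mathbf{s} + l\mathbf{t} = \mathbf{p}} F(\mathbf{s})\, k(\mathbf{t})
```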

In (1), s = p - t; in (2), s = p - l t. That is why (l - 1) zeros get inserted between the filter weights when a dilated convolution is rewritten as a normal one.

You can also check <https://www.tensorflow.org/api_docs/python/tf/nn/atrous_conv2d>

> This is equivalent to convolving the input with a set of upsampled filters, produced by inserting rate - 1 zeros between two consecutive values of the filters along the height and width dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).
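
As a sanity check of this equivalence, here is a minimal 1-D NumPy sketch (my own, not from the paper or the TF docs): a rate-l dilated convolution matches an ordinary convolution whose kernel has (l - 1) zeros inserted between taps.

```python
import numpy as np

def dilated_conv1d(signal, kernel, rate):
    # Direct form of Eq. (2): out[p] = sum_t signal[p - rate*t] * kernel[t]
    out = np.zeros(len(signal))
    for p in range(len(signal)):
        for t, w in enumerate(kernel):
            s = p - rate * t  # s = p - l*t
            if 0 <= s < len(signal):
                out[p] += signal[s] * w
    return out

def upsample_kernel(kernel, rate):
    # Insert (rate - 1) zeros between consecutive filter weights.
    up = np.zeros((len(kernel) - 1) * rate + 1)
    up[::rate] = kernel
    return up

rng = np.random.default_rng(0)
signal = rng.standard_normal(16)
kernel = np.array([1.0, 2.0, 3.0])

dilated = dilated_conv1d(signal, kernel, rate=2)
ordinary = dilated_conv1d(signal, upsample_kernel(kernel, 2), rate=1)
assert np.allclose(dilated, ordinary)
```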

In Section 3, their initialization is essentially an identity matrix (Eq. 4) or a generalized identity matrix (Eq. 5). In Eq. 5, the remaining entries are filled with noise; I believe the same holds for Eq. 4.
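
A minimal NumPy sketch of what I take Eq. (4) to mean, with noise added to the off-identity entries as in Eq. (5); the noise scale `sigma` is my assumption, since the paper does not state it for Eq. (4):

```python
import numpy as np

def identity_init(ksize, channels, sigma=1e-3, rng=None):
    # Identity initialization (Eq. 4): the centre tap passes each channel
    # through unchanged; all other entries are small noise (sigma assumed).
    rng = rng or np.random.default_rng()
    k = rng.standard_normal((ksize, ksize, channels, channels)) * sigma
    c = ksize // 2
    k[c, c] += np.eye(channels)
    return k

k = identity_init(3, 8)
x = np.ones(8)
# The centre tap acts as the identity on channels, up to the noise.
assert np.allclose(k[1, 1] @ x, x, atol=0.1)
```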


},
keywords = {context, deep learning},
read = {Yes},
rating = {4},
date-added = {2017-04-25T16:41:26GMT},
date-modified = {2017-04-25T17:15:53GMT},
url = {http://arxiv.org/abs/1511.07122},
local-url = {file://localhost/Users/yimengzh/Documents/Papers3_revised/Library.papers3/Articles/2015/Yu/arXiv%202015%20Yu.pdf},
file = {{arXiv 2015 Yu.pdf:/Users/yimengzh/Documents/Papers3_revised/Library.papers3/Articles/2015/Yu/arXiv 2015 Yu.pdf:application/pdf}},
uri = {\url{papers3://publication/uuid/0CAFD67C-1F39-4395-AFCB-59D16953E26E}}
}
