This page is designed to help researchers reproduce the experiments of the following publication:

K. Ghiasi-Shirazi. Generalizing the Convolution Operator in Convolutional Neural Networks. Neural Processing Letters, Volume 50, Issue 3, pp 2627–2646, December 2019. (link)

Software

The code for this paper evolved over three years, from January 2016 to January 2019.
During this period, both Caffe and our implementation changed.
In particular, the original implementation of the Weighted L2 Convolution layer lacks the 1/2 coefficient, which was added when I discovered the L2 family of generalized convolution operators.
Therefore, for some experiments you might get slightly different results from those reported.
On the other hand, since we fixed the random seeds, the results of many other experiments should be fully reproducible.
The original implementation of the Weighted L2 Convolution layer was also too slow.
Here, I have included the efficient code implemented by my former MSc student, Keivan Nalaie (email: nalaiek at mcmaster.ca), which I further modified and generalized to the L2 family of generalized convolution operators. This implementation is documented in the following paper:

Keivan Nalaie, Kamaledin Ghiasi-Shirazi, Mohammad-R. Akbarzadeh-T. Efficient implementation of a generalized convolutional neural networks based on weighted Euclidean distance. In 2017 7th International Conference on Computer and Knowledge Engineering (ICCKE), pp. 211-216. IEEE, 2017.
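To make the idea behind the two implementations concrete, here is a minimal NumPy sketch. It is not the Caffe code from the package; the function names are illustrative, and the exact sign and scaling convention (including the 1/2 coefficient mentioned above) follows my reading of the paper. It computes the Weighted L2 Convolution response directly as -1/2 * sum_i w_i (x_i - t_i)^2 over each sliding window, and then via the expansion into ordinary convolutions that underlies the efficient implementation of Nalaie et al.

```python
import numpy as np

def correlate2d_valid(x, k):
    """Plain valid-mode cross-correlation, standing in for an ordinary convolution layer."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def weighted_l2_conv_direct(x, t, w):
    """Direct form: -1/2 * sum_i w_i * (x_i - t_i)^2 per window.
    The sign/scaling convention is an illustrative assumption."""
    H, W = x.shape
    h, ww = t.shape
    out = np.empty((H - h + 1, W - ww + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + h, j:j + ww]
            out[i, j] = -0.5 * np.sum(w * (patch - t) ** 2)
    return out

def weighted_l2_conv_fast(x, t, w):
    """Same quantity via the expansion
       sum w_i (x_i - t_i)^2 = sum w_i x_i^2 - 2 sum w_i t_i x_i + sum w_i t_i^2,
    i.e. two ordinary convolutions plus a per-filter constant."""
    quad = correlate2d_valid(x ** 2, w)   # sum_i w_i x_i^2
    cross = correlate2d_valid(x, w * t)   # sum_i w_i t_i x_i
    const = np.sum(w * t ** 2)            # sum_i w_i t_i^2 (independent of x)
    return -0.5 * (quad - 2.0 * cross + const)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # input "image"
t = rng.standard_normal((3, 3))   # template (plays the role of the kernel)
w = rng.random((3, 3))            # non-negative per-position weights
print(np.allclose(weighted_l2_conv_direct(x, t, w),
                  weighted_l2_conv_fast(x, t, w)))   # True
```

The point of the expanded form is that the quadratic and cross terms are plain convolutions (applied to x squared and to x), so they can be dispatched to the highly optimized convolution routines Caffe already provides, while the template term is just a per-filter constant.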

This implementation is based on the Caffe framework. You can download our package (Caffe+GCNN), which contains Caffe (not the latest version) along with the following new layers: