Deep Neural Networks: Transferability of Features Research Proposal

PRE-RESEARCH PROPOSAL: TRANSFERABILITY OF FEATURES IN DEEP NEURAL NETWORKS
Pre-Research Proposal: Transferability of Features in Deep Neural Networks
Name
Date
Problem Definition
Modern deep neural networks exhibit a curious phenomenon: when trained on images, they tend to learn first-layer features resembling color blobs or Gabor filters. These filters appear so reliably that obtaining anything else on a natural image dataset raises the suspicion that the hyperparameters were chosen poorly or that there is a bug in the software. The phenomenon holds across different datasets and across very different training objectives, including supervised image classification, unsupervised sparse representation learning, and unsupervised density estimation. Because these standard first-layer features arise regardless of the natural image dataset and the specific cost function, they are considered general. In contrast, the features computed by the last layer of a trained network must depend heavily on the chosen task and dataset; the last-layer features are therefore termed specific (Singh et al., 2015). Given that the first layers are general while the last layers are specific, there must be a point within the network where features transition from general to specific (Joshi, 2017). With this in mind, this pre-research proposal has the following objectives:
Objectives
To quantify the degree to which a given layer is general or specific
To establish whether the transition from general to specific occurs abruptly at a single layer or is spread out over many layers
To establish where the transition occurs: near the first layer, in the middle, or near the last layer of the network
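The first objective could be made concrete by scoring how Gabor-like a learned first-layer filter is, for example via its best normalized correlation against a bank of Gabor kernels. The sketch below is only illustrative of such a measure; the kernel sizes, parameter grid, and the `gabor_likeness` scoring function are assumptions for the example, not the proposal's fixed methodology.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a sinusoid at angle theta under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates to the filter's orientation
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def gabor_likeness(filt, bank):
    """Best absolute normalized correlation between a filter and a Gabor bank (1.0 = perfect match)."""
    f = filt - filt.mean()
    f = f / np.linalg.norm(f)
    best = 0.0
    for g in bank:
        g = g - g.mean()
        g = g / np.linalg.norm(g)
        best = max(best, abs(float(np.sum(f * g))))
    return best

# Bank of Gabor kernels at several orientations and wavelengths.
bank = [gabor_kernel(11, theta, lam, sigma=3.0)
        for theta in np.linspace(0, np.pi, 8, endpoint=False)
        for lam in (4.0, 6.0, 8.0)]

rng = np.random.default_rng(0)
edge_like = gabor_kernel(11, np.pi / 4, 6.0, 3.0)  # stands in for a learned first-layer filter
noise = rng.standard_normal((11, 11))              # a filter with no oriented structure

print(gabor_likeness(edge_like, bank))  # close to 1.0: this filter is in the bank's family
print(gabor_likeness(noise, bank))      # noticeably lower
```

Averaging such a score over all filters in a layer gives one crude per-layer "generality" number; the remaining objectives would instead require transfer experiments (retraining on a new task with the first layers frozen), which this toy measure does not capture.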
Timetable

Task: Duration/Time
Evaluating research topics and identifying a suitable research area: Three days (Nov 25, 2017 to Nov 28, 2017)
Writing the pre-proposal: One day (Nov 29, 2017)
Pre-research data and materials collection: One week
Writing the formal research proposal: One week
Getting professor feedback and making necessary adjustments: Two weeks
Designing the research methodology: Three days
Collecting materials for the research: One week
Literature review: Two weeks
Designing the experimental setup: One week
Data collection: One week
Data analysis: One week
Discussion of research findings: Four days
Writing the draft research paper: Five days
Obtaining professor feedback: Two weeks
Making adjustments and writing the final research paper with conclusions and recommendations: Two weeks
Presenting the research: One day
References
Joshi, N. (2017). Combinational neural network using Gabor filters for the classification of handwritten digits (pp. 1-4). Frankfurt: Frankfurt Institute for Advanced Studies. Retrieved from https://arxiv.org/pdf/1709.05867.pdf
Singh, B., De, S., Zhang, Y., Goldstein, T., & Taylor, G. (2015, December). Layer-specific adaptive learning rates for deep networks. IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 364-368.