ENGR7019 (Autumn 2022) Specialised Software Auditory System Project
This report presents a MATLAB-based project focused on modeling the human auditory system, specifically for the ENGR7019 course at Western Sydney University. The assignment involves the implementation of an auditory system model, mimicking the functionality of the Cochlea, using Gammatone filters and the creation of auditory spectrograms to analyze sound signals. The methodology includes utilizing MATLAB functions such as 'gammatonegram' and 'audioread' to process sound signals, convert them into two-dimensional frequency representations, and generate Short-Time Fourier Transform (STFT) spectrograms. The project addresses various tasks, including adjusting filter parameters, analyzing frequency responses, and plotting sound signals to visualize the auditory system's behavior. The results demonstrate the successful application of these MATLAB functions to simulate aspects of human auditory processing, providing insights into sound intensity, pitch, and timbre, and ultimately validating the model's effectiveness in representing the human auditory system.

WESTERN SYDNEY
UNIVERSITY
ENGR7019 (Autumn 2022) Specialised
Software Applications
Assignment-1

TABLE OF CONTENTS
1 INTRODUCTION
2 METHODOLOGY
3 DESIGNING AN AUDITORY SYSTEM ON MATLAB
3.1 TASK 1
3.1.1 PART A
3.1.2 PART B
3.2 TASK 2
3.3 TASK 3
3.3.1 RESULTS
3.4 TASK 4
3.4.1 PART A
3.4.2 PART B
4 CONCLUSION
5 REFERENCES

LIST OF FIGURES:
Figure 1: Components of the Auditory Model
Figure 2: Changing default values in the gammatonegram function
Figure 3: New window size of auditory spectrogram
Figure 4: New window size of auditory spectrogram-STFT
Figure 5: Changed Lower Center Frequency
Figure 6: Lowest center frequency response of every channel
Figure 7: Lowest center frequency response of 5th channel
Figure 8: Gammatone filter lowest center frequency range
Figure 9: Gammatone filter lowest center frequency range
Figure 10: Time vector
Figure 11: Code for Plotting Sound Signal
Figure 12: Plotting sound signal
Figure 13: Code for auditory spectrogram
Figure 14: Coding for fast STFT Spectrogram
Figure 15: Gammatone Spectrogram
Figure 16: STFT Spectrogram

1 INTRODUCTION
The sound signal is processed into important information through a complex arrangement
of numerous components of the Auditory System of humans, which consists of two parts, the
central and the peripheral parts (Kumar et al., 2016). The outer, inner and middle ears
collectively define the peripheral part of the auditory system of humans. The peripheral auditory
system allows the transmission of sound impulses from the outer ear to the auditory neurons
surrounding the ear and then to the inner ear. The sound impulse entering the ear is one-
dimensional and needs to be converted into two-dimensional sound to be processed by the
auditory system. Inside the inner ear, a converter, Cochlea, is presently responsible for
converting sound impulse into a two-dimensional sound that travels deep into the inner part of
the ear. The brain contains the roots of the pathways of the auditory system where sound is
perceived at the cortex. The cortex defines the primary aspects of sound signals like the pitch and
timbre of the sound and the frequency that is important for speech recognition in the human
auditory system (P. Key, 2016).
Human Auditory System is deeply researched to develop a sufficient understanding of its
working processes, mechanism, and complexity. Several remedies for the ineffective functioning
of the human auditory system are addressed through high-tech equipment that aids human
hearing ability. In December 2019, 736,900 cochlear devices were used worldwide, and about
183,000 people, including adults and children, received cochlear implants, according to a report
presented by the U.S Food Drug Administration (FDA). Cochlear implants are considerably
different from hearing aid equipment; the former directly contacts and stimulates the nerves of
the auditory system while bypassing the damaged and inaccessible part, while the latter amplifies
the impulses of sound to provide ease in processing by damaged ear parts (Naples and
Ruckenstein, 2020). Cochlear implants are highly compatible with the working of the human
auditory system. The hearing aid equipment and the cochlear implants are delicate and provide
remedial solutions for impaired hearing abilities (Dombrowski, Rankovic and Moser, 2018).

2 METHODOLOGY
Advancements in medical-science technologies provide many opportunities to develop artificial devices that integrate with the working mechanisms of the human auditory system. To understand the auditory system of humans and the signal processing occurring in the human ear, this assignment devises a model of an auditory system in MATLAB that resembles the human auditory system and the working of the cochlea. To fulfil the report's purpose, several versions of the code were assessed for compatibility with the test audio signal. Gammatone and gammatonegram are the two primary functions used in this assignment. The project revolves around converting a one-dimensional sound signal into a two-dimensional frequency representation known as an auditory spectrogram (Decorsiere, Sondergaard, MacDonald and Dau, 2014). The purpose of obtaining a spectrogram is to provide vital information regarding the intensity, direction, timbre and pitch of the sound signal. Data from the auditory system can be extracted using numerous methods, and this extracted data assists in the design and study of auditory instruments and aiding tools such as cochlear implants, hearing aid equipment, and speech and music retrieval systems.
The designed auditory model has three major components, shown in Figure 1: the generated sound impulse, the cochlear filterbank that filters the components of the sound, and the resulting auditory spectrogram. The auditory model follows a detailed mechanism while processing sound.

Figure 1: Components of the Auditory Model
The sound impulse is always one-dimensional. The cochlear filterbank provides a pathway for the incoming signal and contains numerous gammatone filters arranged in parallel and in cascade. It is responsible for converting the incoming one-dimensional signal into two dimensions to generate the auditory spectrogram of the received signal. The center frequency and the bandwidth of the gammatone filters are directly related: a filter with a higher center frequency has a wider bandwidth (Khalighinejad, Herrero, Mehta and Mesgarani, 2019). The gammatone filter is a bandpass filter calibrated around its center frequency, as it only permits a narrow band of frequencies to pass at a time. The frequency of the oscillating output signal produced by a filter is almost the same as that filter's center frequency (Verhulst, Altoè and Vasilkov, 2018).
Additionally, the variable calibration of the filters permits them to provide several output signals spanning the frequency spectrum of the provided input signal. The positive values of each output signal are retained, and the negative values are set to zero using half-wave rectification. This rectification eliminates negative values and allows the data to be assembled into a two-dimensional spectrogram that provides a straightforward display of the results.
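The half-wave rectification step can be sketched in a few lines of MATLAB; the signal here is an illustrative sine wave, not the assignment's audio:

```matlab
% Half-wave rectification: retain positive samples, zero out negative ones.
t = 0:0.001:1;                 % 1 s of time at a 1 kHz sampling rate
x = sin(2*pi*5*t);             % example oscillating signal (5 Hz)
y = max(x, 0);                 % negative values are rounded off to zero
plot(t, x, t, y);              % compare the original and rectified signals
legend('original', 'rectified');
```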

3 DESIGNING AN AUDITORY SYSTEM ON MATLAB
Pre-Conditions:
Student ID-20053983
The table in the task file assigns pre-conditions for the gammatone filter and the number of channels. The right-most digit of the student number selects the lowest center frequency and the number of channels of the gammatone filterbank. From the table, the lowest center frequency of the gammatone filter is 90 Hz, and the corresponding number of channels is 96.

3.1 TASK 1
3.1.1 PART A
3.1.1.1 RESULTS
According to the student-number specification, the number of channels is 96; therefore, the height of both the auditory spectrogram and the Short-Time Fourier Transform (STFT) spectrogram is 96. The 'gammatonegram' function provides an accurate way to define the characteristics of the auditory spectrogram, and it is called with the 'number of channels' argument replacing the default value. Figure 2 shows the code with the changed default values, while Figures 3 and 4 show the changed window size of the spectrograms obtained by altering the defaults of the gammatonegram function. The lowest center frequency of the gammatone filter is given by FMIN, while the number of channels is denoted by N; here they are set to 90 and 96, respectively.
Figure 2: Changing default values in the gammatonegram function
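Assuming the toolbox follows Dan Ellis's gammatonegram signature, [Y,F] = gammatonegram(X,SR,TWIN,THOP,N,FMIN,FMAX), a call with the changed defaults might look like this (the argument order is an assumption about the toolbox version used):

```matlab
N    = 96;                     % number of channels (student-number specification)
FMIN = 90;                     % lowest center frequency in Hz
[d, sr] = audioread('sa2.wav');
% 25 ms window and 10 ms hop are the toolbox defaults; FMAX set to Nyquist
[Y, F] = gammatonegram(d, sr, 0.025, 0.010, N, FMIN, sr/2);
disp(size(Y));                 % the spectrogram height equals N (96 rows)
```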

Figure 3: New window size of the auditory spectrogram
Figure 4: New window size of auditory spectrogram-STFT
3.1.2 PART B
3.1.2.1 RESULTS
The input arguments of two functions need to be changed to vary the center frequency in the frequency gain response. The functions are 'MakeERBFilters' and 'fft2gammatonemx', as displayed in Figure 5, while Figures 6 and 7 display the results of the alterations.
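Assuming Slaney's Auditory Toolbox conventions for 'MakeERBFilters' and 'ERBFilterBank', the gain response with the lowered center frequency could be sketched as follows (the sampling rate and plotting details are illustrative):

```matlab
fs = 16000;                            % assumed sampling rate in Hz
fcoefs = MakeERBFilters(fs, 96, 90);   % 96 channels, lowest center frequency 90 Hz
% Feed an impulse through the filterbank, then take the FFT of each channel
y    = ERBFilterBank([1 zeros(1, 511)], fcoefs);
resp = 20*log10(abs(fft(y')));
f    = (0:255)/256 * fs/2;             % frequency axis up to the Nyquist frequency
semilogx(f, resp(1:256, :));           % gain response of every channel
xlabel('Frequency (Hz)'); ylabel('Gain (dB)');
```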

Figure 5: Changed Lower Center Frequency
Figure 6: Lowest center Frequency response of every channel
Figure 7: Lowest center Frequency response of 5th channel

Figure 6 displays the frequency response graph for every channel, and Figure 7 displays the plot for the 5th channel of the filterbank. Figures 8 and 9 show the lower center frequency of the gammatone filter within the specified 62 Hz to 78 Hz range.
Figure 8: Gammatone filter lowest center frequency range
Figure 9: Gammatone filter lowest center frequency range

3.2 TASK 2
3.2.1 RESULTS
The sound signal is read using the 'audioread' command, which for this task loads the sa2.wav audio file. The resulting sound signal and sampling rate are stored in 'd' and 'sr', respectively. Figure 10 displays the MATLAB code for generating the time vector.
Figure 10: Time vector
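The time-vector code of Figure 10 amounts to two lines; the sample index starts at 0 so that the vector begins at t = 0:

```matlab
[d, sr] = audioread('sa2.wav');   % d: sound samples, sr: sampling rate in Hz
t = (0:length(d)-1) / sr;         % time vector in seconds, one entry per sample
```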
3.3 TASK 3
3.3.1 RESULTS
Figure 12 shows a plot of the sound signal stored in variable 'd' against the time vector obtained from the code in Task 2. Figure 11 shows the code for plotting the sound signal.
Figure 11: Code for Plotting Sound Signal
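A sketch of the plotting step, repeating the time vector from Task 2 so the snippet runs on its own (labels are illustrative):

```matlab
[d, sr] = audioread('sa2.wav');   % reload the signal and sampling rate
t = (0:length(d)-1) / sr;         % time vector from Task 2
plot(t, d);                       % amplitude against time
xlabel('Time (s)'); ylabel('Amplitude');
title('Sound signal sa2.wav');
```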

Figure 12: Plotting sound signal
3.4 TASK 4
3.4.1 PART A
3.4.1.1 RESULTS
The defined specifications set the input arguments for Task 4. The sound signal 'd' is convolved with the respective gammatone filters and processed automatically by the 'gammatonegram' function. Figure 13 displays the auditory spectrogram code using the gammatonegram function.
Figure 13: Code for auditory spectrogram
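Under the same assumed gammatonegram signature, the auditory spectrogram of Figure 13 might be produced along these lines; the dB conversion and axis labels are illustrative:

```matlab
[d, sr] = audioread('sa2.wav');
% 96 channels with the lowest center frequency at 90 Hz, per the pre-conditions
[Y, F] = gammatonegram(d, sr, 0.025, 0.010, 96, 90, sr/2);
imagesc(20*log10(max(Y, 1e-10))); % display on a dB scale (floor avoids log of 0)
axis xy;                          % put low frequencies at the bottom
xlabel('Frame'); ylabel('Channel');
```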

3.4.2 PART B
3.4.2.1 RESULTS
The same gammatonegram function is utilized for the coding in Part B. For this task, the two arguments of sound signal 'd' and sampling rate 'sr' were entered. The fast method used here is more efficient than filtering with the full gammatone filterbank while remaining nearly as accurate, because it approximates the gammatone channels by weighting a Short-Time Fourier Transform (STFT). Figure 14 shows the code for the fast STFT spectrogram, while Figures 15 and 16 display the auditory spectrogram and the STFT spectrogram resulting from the code.
Figure 14: Coding For fast STFT Spectrogram
Figure 15: Gammatone Spectrogram
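Ellis's implementation exposes a USEFFT flag as an eighth argument that selects the fast, STFT-weighting approximation instead of full filtering; assuming that version of the toolbox, Part B reduces to setting the flag:

```matlab
[d, sr] = audioread('sa2.wav');
% USEFFT = 1: approximate the gammatone channels by weighting an STFT
[Y, F] = gammatonegram(d, sr, 0.025, 0.010, 96, 90, sr/2, 1);
imagesc(20*log10(max(Y, 1e-10)));  % dB scale with a small floor
axis xy; xlabel('Frame'); ylabel('Channel');
```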

Figure 16: STFT Spectrogram

4 CONCLUSION
This assignment develops concepts of the human auditory system using models defined in MATLAB with auditory functions. It provides an understanding of the natural cochlea's structure, function, and processing, captured in the visual auditory spectrogram, which extracts valuable information from the sound signal, such as timbre and pitch, as the signal is passed onward for processing. Various MATLAB functions and code fragments are used to check the auditory model's validity. A sample sound signal is passed through a cochlear filterbank containing numerous gammatone filters to obtain an auditory spectrogram of the signal, and the fast method is used to obtain the Short-Time Fourier Transform spectrogram. The required window sizes follow the defined specifications, and the gain response of the filterbank is obtained in the desired frequency range. The results are obtained by changing arguments in the gammatonegram function to display the STFT and auditory spectrograms. After presenting the results, the assignment demonstrates that these MATLAB functions behave like the human auditory system.

5 REFERENCES
Kumar, S., Joseph, S., Gander, P., Barascud, N., Halpern, A. and Griffiths, T., 2016. A
Brain System for Auditory Working Memory. Journal of Neuroscience, 36(16), pp.4492-4505.
Key, A.P., 2016. Human Auditory Processing: Insights from Cortical Event-related
Potentials. AIMS Neuroscience, 3(2), pp.141-162.
Naples, J. and Ruckenstein, M., 2020. Cochlear Implant. Otolaryngologic Clinics of
North America, 53(1), pp.87-102.
Dombrowski, T., Rankovic, V. and Moser, T., 2018. Toward the Optical Cochlear
Implant. Cold Spring Harbor Perspectives in Medicine, 9(8), p.a033225.
Decorsiere, R., Sondergaard, P., MacDonald, E. and Dau, T., 2014. Inversion of auditory
spectrograms, traditional spectrograms, and other envelope representations. IEEE/ACM
Transactions on Audio, Speech, and Language Processing, pp.1-1.
Verhulst, S., Altoè, A. and Vasilkov, V., 2018. Computational modeling of the human
auditory periphery: Auditory-nerve responses, evoked potentials and hearing loss. Hearing
Research, 360, pp.55-75.
Khalighinejad, B., Herrero, J., Mehta, A. and Mesgarani, N., 2019. Adaptation of the
human auditory cortex to changing background noise. Nature Communications, 10(1).