A new modeling approach to the "effective" signal processing in the auditory system was developed that describes effects of spectral and temporal integration in amplitude-modulation detection and masking. Envelope fluctuations within each auditory channel are analyzed with a modulation filterbank. The parameters of the filterbank are the same for all auditory filters and were adjusted so that the model accounts for modulation-detection and modulation-masking data with narrowband carriers at a high center frequency. In the detection stage, the outputs of all modulation filters from all excited peripheral channels are combined linearly with optimal weights. To integrate information across time, a "multiple-look" strategy is implemented within the detection stage, which allows the model to account for the long time constants derived from modulation-integration data without introducing true long-term integration. Model predictions are compared both with the authors' own experimental results and with experimental data from the literature. A large variety of psychoacoustical experiments is well described by the model. This supports the hypothesis that amplitude fluctuations are processed by modulation-frequency-selective channels. The model might also be used in applications such as psychoacoustical experiments with hearing-impaired listeners and predictions of speech intelligibility and speech quality.
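The processing chain summarized above (envelope extraction per auditory channel, a modulation filterbank, and a linear combination of channel outputs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the FFT-based Hilbert envelope, the rectangular constant-Q modulation filters, the center frequencies, and the RMS-plus-weights decision variable are all simplifying assumptions chosen for brevity.

```python
import numpy as np

def envelope(x):
    """Hilbert envelope via FFT (assumes an even-length real signal).

    A stand-in for the peripheral envelope extraction; the actual model
    uses a more elaborate auditory preprocessing stage.
    """
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[n // 2] = 1.0
    h[1:n // 2] = 2.0  # analytic-signal weighting of positive frequencies
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def modulation_filterbank(env, fs, centers, q=1.0):
    """Band-pass the mean-removed envelope around each modulation
    center frequency.

    Rectangular FFT-domain filters with constant relative bandwidth
    (bandwidth = fc / q) are an illustrative simplification of the
    model's modulation filters.
    """
    n = len(env)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(env - env.mean())
    bands = []
    for fc in centers:
        bw = fc / q
        mask = (freqs >= fc - bw / 2) & (freqs <= fc + bw / 2)
        bands.append(np.fft.irfft(spec * mask, n=n))
    return np.array(bands)

def decision_variable(sig, fs, centers, weights):
    """Combine per-channel RMS values linearly with fixed weights,
    mimicking the optimally weighted combination in the detection stage
    (here for a single peripheral channel only)."""
    bands = modulation_filterbank(envelope(sig), fs, centers)
    rms = np.sqrt(np.mean(bands ** 2, axis=1))
    return float(np.dot(weights, rms))

# Illustrative use: a 1-kHz carrier, 100% present for 1 s, modulated at
# 16 Hz with depth 0.5; the 16-Hz modulation channel dominates the output.
fs = 16000
t = np.arange(fs) / fs
sig = (1 + 0.5 * np.cos(2 * np.pi * 16 * t)) * np.sin(2 * np.pi * 1000 * t)
bands = modulation_filterbank(envelope(sig), fs, [4.0, 16.0, 64.0])
channel_rms = np.sqrt(np.mean(bands ** 2, axis=1))
```

With this toy signal, the channel centered at the imposed 16-Hz modulation frequency carries almost all of the envelope power, which is the selectivity that motivates the modulation-filterbank hypothesis.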