This document summarizes a research paper that proposes a multimodal biometric system combining palmprint images and speech signals. Each modality is processed with its own feature extractor: speech features are Subband Cepstral Coefficients derived from a wavelet packet transform, while palmprint features are obtained with a Modified Canonical Form method. The two matchers' scores are then fused at the score level using a weighted sum rule. Evaluated on a database of over 300 subjects, the fused system achieves higher recognition rates than either modality on its own.
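The score-level fusion step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights, the min-max normalization, and the acceptance threshold are all assumptions introduced for the example, since the summary only states that a weighted sum rule is used.

```python
import numpy as np

def min_max_normalize(scores):
    """Min-max normalization of a set of match scores to [0, 1].
    (An assumed preprocessing step; the paper's normalization
    scheme is not specified in the summary.)"""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(palm_score, speech_score, w_palm=0.5, w_speech=0.5):
    """Weighted-sum fusion of two normalized match scores.
    The weight values here are illustrative, not from the paper."""
    return w_palm * palm_score + w_speech * speech_score

# Example: normalize each matcher's scores, fuse, and threshold.
palm = min_max_normalize([0.2, 0.8, 0.5])
speech = min_max_normalize([0.1, 0.9, 0.4])
fused = fuse_scores(palm, speech, w_palm=0.6, w_speech=0.4)
decision = fused > 0.5  # hypothetical acceptance threshold
```

In practice the weights would be tuned on a validation set to reflect each modality's reliability, which is what allows the fused system to outperform either matcher alone.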