The Ultimate Guide to Voice Messages in React Native: Record, Play, and Send for a WeChat-Grade Experience
Want to add WeChat-style voice messaging to your React Native app? This article walks you step by step through the core features: recording, playback, and sending. Whether you are new to React Native or an experienced developer, you should find something useful here.
1. Requirements and Technology Choices
Before we start, let's spell out the requirements:
- Recording: users can easily record voice and have it saved to a file.
- Playback: users can play back recorded audio files.
- Sending: the recording can be uploaded to a server or delivered to a specific user.
- Cross-platform: the solution must work well on both iOS and Android.
Based on these requirements, we choose the following stack:
- react-native-sound: audio playback with a simple API and broad format support; in this guide we use it for playing received messages, since react-native-audio-recorder-player handles local playback itself. (https://github.com/zmxv/react-native-sound)
- react-native-audio-recorder-player: audio recording and playback with rich configuration options. (https://github.com/hyochan/react-native-audio-recorder-player)
- react-native-permissions: runtime permission handling on Android and iOS, so the app can access the microphone and storage. (https://github.com/zoontek/react-native-permissions)
- rn-fetch-blob: file upload and download, powerful and flexible. (https://github.com/joltup/rn-fetch-blob)
Why these libraries?
They are mature, widely used solutions in the React Native community, with good documentation and active support. They hold up well in terms of features, performance, and platform compatibility, which is exactly what we need.
2. Project Setup and Permissions
First, make sure your React Native project is initialized. Then install the dependencies:
yarn add react-native-sound react-native-audio-recorder-player react-native-permissions rn-fetch-blob
# or
npm install react-native-sound react-native-audio-recorder-player react-native-permissions rn-fetch-blob
# iOS also needs the native pods linked:
cd ios && pod install && cd ..
Next, configure the permissions. In android/app/src/main/AndroidManifest.xml, add the following. Note that the two storage permissions only matter on older Android versions: from Android 10 onward scoped storage applies, and the recording path we use later lives in the app's own cache directory, which needs no storage permission at all.
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
For iOS, add the following usage description to ios/<YourProjectName>/Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>This app needs access to your microphone to record voice messages.</string>
The microphone is the only permission voice messaging strictly needs on iOS, since recordings are written into the app sandbox. Now let's request the necessary permissions when the app starts. Create a PermissionHelper.js file:
import { PermissionsAndroid, Platform } from 'react-native';
import { check, request, PERMISSIONS, RESULTS } from 'react-native-permissions';
const checkAndroidPermissions = async () => {
try {
// When recording into the app's own cache directory (as we do later),
// RECORD_AUDIO is the only permission that is strictly required. The two
// storage permissions matter only on older devices; on Android 11+ they
// are denied automatically, so we must not treat them as mandatory.
const granted = await PermissionsAndroid.requestMultiple([
PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
PermissionsAndroid.PERMISSIONS.WRITE_EXTERNAL_STORAGE,
PermissionsAndroid.PERMISSIONS.READ_EXTERNAL_STORAGE,
]);
if (granted['android.permission.RECORD_AUDIO'] === PermissionsAndroid.RESULTS.GRANTED) {
console.log('Android permissions granted');
return true;
}
console.log('Android permissions denied');
return false;
} catch (err) {
console.warn(err);
return false;
}
};
const checkiOSPermissions = async () => {
// Voice messages only need the microphone on iOS; recordings are written
// to the app sandbox, which requires no extra permission.
const microphoneStatus = await check(PERMISSIONS.IOS.MICROPHONE);
if (microphoneStatus === RESULTS.GRANTED) {
console.log('iOS permissions already granted');
return true;
}
const requestMicrophone = await request(PERMISSIONS.IOS.MICROPHONE);
if (requestMicrophone === RESULTS.GRANTED) {
console.log('iOS permissions granted');
return true;
}
console.log('iOS permissions denied');
return false;
};
export const checkPermissions = async () => {
if (Platform.OS === 'android') {
return checkAndroidPermissions();
} else {
return checkiOSPermissions();
}
};
Call the checkPermissions function from your root component:
import React, { useEffect } from 'react';
import { View, Text } from 'react-native';
import { checkPermissions } from './PermissionHelper';
const App = () => {
useEffect(() => {
checkPermissions();
}, []);
return (
<View>
<Text>Voice Message App</Text>
</View>
);
};
export default App;
3. Implementing Recording
Now let's build the recording feature. Create a VoiceRecorder.js component:
import React, { useState, useEffect } from 'react';
import { View, Text, TouchableOpacity, StyleSheet, Platform } from 'react-native';
import AudioRecorderPlayer, { AVEncoderAudioQualityIOSType, AVEncodingOption, AudioEncoderAndroidType, AudioSourceAndroidType } from 'react-native-audio-recorder-player';
import RNFetchBlob from 'rn-fetch-blob';
const audioRecorderPlayer = new AudioRecorderPlayer();
const VoiceRecorder = () => {
const [isRecording, setIsRecording] = useState(false);
const [recordSecs, setRecordSecs] = useState(0);
const [recordTime, setRecordTime] = useState('00:00:00');
const [audioFilePath, setAudioFilePath] = useState('');
// Recording options; these keys come from the library's AudioSet type.
const audioSet = {
AudioEncoderAndroid: AudioEncoderAndroidType.AAC,
AudioSourceAndroid: AudioSourceAndroidType.MIC,
AVEncoderAudioQualityKeyIOS: AVEncoderAudioQualityIOSType.high,
AVNumberOfChannelsKeyIOS: 2,
AVFormatIDKeyIOS: AVEncodingOption.aac,
};
useEffect(() => {
audioRecorderPlayer.setSubscriptionDuration(0.09);
return () => {
audioRecorderPlayer.removeRecordBackListener();
audioRecorderPlayer.removePlayBackListener();
};
}, []);
const startRecording = async () => {
if (isRecording) return;
const path = Platform.select({
ios: 'hello.m4a',
// Record into the app cache directory; the sdcard root is not writable
// under scoped storage on Android 10+.
android: `${RNFetchBlob.fs.dirs.CacheDir}/hello.mp4`,
});
try {
const uri = await audioRecorderPlayer.startRecorder(path, audioSet);
audioRecorderPlayer.addRecordBackListener((e) => {
setRecordSecs(e.current_position);
setRecordTime(audioRecorderPlayer.mmssss(Math.floor(e.current_position)));
});
console.log(`uri: ${uri}`);
setAudioFilePath(uri);
setIsRecording(true);
} catch (error) {
console.error('startRecording error', error);
}
};
const stopRecording = async () => {
if (!isRecording) return;
try {
const result = await audioRecorderPlayer.stopRecorder();
audioRecorderPlayer.removeRecordBackListener();
setRecordSecs(0);
console.log(result);
setIsRecording(false);
setRecordTime('00:00:00');
} catch (error) {
console.error('stopRecording error', error);
}
};
return (
<View style={styles.container}>
<Text style={styles.recordTime}>{recordTime}</Text>
<TouchableOpacity style={styles.button} onPressIn={startRecording} onPressOut={stopRecording}>
<Text style={styles.buttonText}>{isRecording ? 'Recording...' : 'Hold to Record'}</Text>
</TouchableOpacity>
</View>
);
};
const styles = StyleSheet.create({
container: {
alignItems: 'center',
justifyContent: 'center',
padding: 20,
},
recordTime: {
fontSize: 20,
marginBottom: 10,
},
button: {
backgroundColor: '#4CAF50',
padding: 15,
borderRadius: 10,
},
buttonText: {
color: 'white',
fontSize: 18,
textAlign: 'center',
},
});
export default VoiceRecorder;
Code walkthrough:
- AudioRecorderPlayer: a single AudioRecorderPlayer instance drives both recording and playback.
- useState: hooks track the recording state, the elapsed time, and the recorded file path.
- useEffect: sets the listener subscription interval on mount and removes both listeners on unmount.
- startRecording: starts the recorder at the chosen path and adds a record-back listener that updates the elapsed time in real time.
- stopRecording: stops the recorder, removes the listener, and resets the recording state.
- TouchableOpacity: the record button starts recording on onPressIn and stops on onPressOut, which gives the hold-to-record gesture.
Notes:
- On Android, pick a writable path for the recording; the app cache directory used above requires no storage permission.
- On iOS, recordings are stored in the app sandbox by default.
- You can customize the recording format and codec as needed.
- The record-back listener reports the position in milliseconds as e.current_position in 2.x releases of react-native-audio-recorder-player; 3.x renames it to e.currentPosition, so match the field name to your installed version.
- WeChat caps voice messages at 60 seconds; one way to enforce a similar cap is shown in the sketch below.
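A minimal sketch of such a cap, assuming the 2.x e.current_position field used in the listing and a 60-second limit of our own choosing; the listener stops the recorder once the limit is hit:
const MAX_RECORD_MS = 60 * 1000; // assumed WeChat-style 60-second cap

audioRecorderPlayer.addRecordBackListener(async (e) => {
  setRecordSecs(e.current_position);
  setRecordTime(audioRecorderPlayer.mmssss(Math.floor(e.current_position)));
  if (e.current_position >= MAX_RECORD_MS) {
    // Same teardown as stopRecording(): stop the recorder, remove the
    // listener, and reset the UI state.
    await audioRecorderPlayer.stopRecorder();
    audioRecorderPlayer.removeRecordBackListener();
    setIsRecording(false);
  }
});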
4. Implementing Playback
Next, let's implement playback. Modify the VoiceRecorder.js component to add a play button and the related logic:
import React, { useState, useEffect } from 'react';
import { View, Text, TouchableOpacity, StyleSheet, Platform } from 'react-native';
import AudioRecorderPlayer, { AVEncoderAudioQualityIOSType, AVEncodingOption, AudioEncoderAndroidType, AudioSourceAndroidType } from 'react-native-audio-recorder-player';
import RNFetchBlob from 'rn-fetch-blob';
const audioRecorderPlayer = new AudioRecorderPlayer();
const VoiceRecorder = () => {
const [isRecording, setIsRecording] = useState(false);
const [recordSecs, setRecordSecs] = useState(0);
const [recordTime, setRecordTime] = useState('00:00:00');
const [audioFilePath, setAudioFilePath] = useState('');
const [isPlaying, setIsPlaying] = useState(false);
const [currentPositionSec, setCurrentPositionSec] = useState(0);
const [currentDurationSec, setCurrentDurationSec] = useState(0);
const [playTime, setPlayTime] = useState('00:00:00');
const [duration, setDuration] = useState('00:00:00');
// Recording options; these keys come from the library's AudioSet type.
const audioSet = {
AudioEncoderAndroid: AudioEncoderAndroidType.AAC,
AudioSourceAndroid: AudioSourceAndroidType.MIC,
AVEncoderAudioQualityKeyIOS: AVEncoderAudioQualityIOSType.high,
AVNumberOfChannelsKeyIOS: 2,
AVFormatIDKeyIOS: AVEncodingOption.aac,
};
useEffect(() => {
audioRecorderPlayer.setSubscriptionDuration(0.09);
return () => {
audioRecorderPlayer.removeRecordBackListener();
audioRecorderPlayer.removePlayBackListener();
};
}, []);
const startRecording = async () => {
if (isRecording) return;
const path = Platform.select({
ios: 'hello.m4a',
// Record into the app cache directory; the sdcard root is not writable
// under scoped storage on Android 10+.
android: `${RNFetchBlob.fs.dirs.CacheDir}/hello.mp4`,
});
try {
const uri = await audioRecorderPlayer.startRecorder(path, audioSet);
audioRecorderPlayer.addRecordBackListener((e) => {
setRecordSecs(e.current_position);
setRecordTime(audioRecorderPlayer.mmssss(Math.floor(e.current_position)));
});
console.log(`uri: ${uri}`);
setAudioFilePath(uri);
setIsRecording(true);
} catch (error) {
console.error('startRecording error', error);
}
};
const stopRecording = async () => {
if (!isRecording) return;
try {
const result = await audioRecorderPlayer.stopRecorder();
audioRecorderPlayer.removeRecordBackListener();
setRecordSecs(0);
console.log(result);
setIsRecording(false);
setRecordTime('00:00:00');
} catch (error) {
console.error('stopRecording error', error);
}
};
const startPlaying = async () => {
if (isPlaying || !audioFilePath) return; // nothing recorded yet
try {
console.log('start playing');
const msg = await audioRecorderPlayer.startPlayer(audioFilePath);
const volume = await audioRecorderPlayer.setVolume(1.0);
console.log(`audioRecorderPlayer.startPlayer: ${msg}`);
console.log(`audioRecorderPlayer.setVolume: ${volume}`);
audioRecorderPlayer.addPlayBackListener((e) => {
setCurrentPositionSec(e.current_position);
setCurrentDurationSec(e.duration);
setPlayTime(audioRecorderPlayer.mmssss(Math.floor(e.current_position)));
setDuration(audioRecorderPlayer.mmssss(Math.floor(e.duration)));
});
setIsPlaying(true);
} catch (error) {
console.error('startPlaying error', error);
}
};
const stopPlaying = async () => {
console.log('stop playing');
await audioRecorderPlayer.stopPlayer();
audioRecorderPlayer.removePlayBackListener();
setIsPlaying(false);
};
return (
<View style={styles.container}>
<Text style={styles.recordTime}>{recordTime}</Text>
<Text style={styles.recordTime}>{playTime} / {duration}</Text>
<TouchableOpacity style={styles.button} onPressIn={startRecording} onPressOut={stopRecording}>
<Text style={styles.buttonText}>{isRecording ? 'Recording...' : 'Hold to Record'}</Text>
</TouchableOpacity>
<TouchableOpacity style={styles.button} onPress={isPlaying ? stopPlaying : startPlaying}>
<Text style={styles.buttonText}>{isPlaying ? 'Stop' : 'Play'}</Text>
</TouchableOpacity>
</View>
);
};
const styles = StyleSheet.create({
container: {
alignItems: 'center',
justifyContent: 'center',
padding: 20,
},
recordTime: {
fontSize: 20,
marginBottom: 10,
},
button: {
backgroundColor: '#4CAF50',
padding: 15,
borderRadius: 10,
marginTop: 10,
},
buttonText: {
color: 'white',
fontSize: 18,
textAlign: 'center',
},
});
export default VoiceRecorder;
Code walkthrough:
- isPlaying: playback state, managed with useState.
- startPlaying: starts the player on the recorded file and adds a play-back listener that updates the current position and total duration in real time (rendered as playTime / duration above the buttons).
- stopPlaying: stops the player, removes the listener, and resets the playback state.
Note that the play-back listener keeps firing until the player is stopped; the sketch below resets the UI once playback reaches the end.
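A minimal sketch of end-of-playback handling, again assuming the 2.x field names used in the listing:
audioRecorderPlayer.addPlayBackListener(async (e) => {
  setPlayTime(audioRecorderPlayer.mmssss(Math.floor(e.current_position)));
  setDuration(audioRecorderPlayer.mmssss(Math.floor(e.duration)));
  if (e.duration > 0 && e.current_position >= e.duration) {
    // Playback finished: release the player and reset the button state.
    await audioRecorderPlayer.stopPlayer();
    audioRecorderPlayer.removePlayBackListener();
    setIsPlaying(false);
  }
});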
5. Implementing Send
Finally, we implement sending: the recording is uploaded to a server using the rn-fetch-blob library:
import RNFetchBlob from 'rn-fetch-blob';
const uploadAudio = async (filePath) => {
try {
const apiUrl = 'YOUR_UPLOAD_API_ENDPOINT'; // replace with your upload endpoint
const fileName = filePath.split('/').pop();
// RNFetchBlob.wrap expects a plain path, but startRecorder may return a
// file:// URI, so strip the scheme first.
const path = filePath.replace('file://', '');
const response = await RNFetchBlob.fetch('POST', apiUrl, {
'Content-Type': 'multipart/form-data',
}, [
{
name: 'audio',
filename: fileName,
data: RNFetchBlob.wrap(path),
},
]);
if (response.respInfo.status === 200) {
console.log('Upload successful');
// handle a successful upload here
} else {
console.log('Upload failed', response.respInfo);
// handle a failed upload here
}
} catch (error) {
console.error('uploadAudio error', error);
}
};
Code walkthrough:
- uploadAudio: uploads the recorded file as a multipart/form-data POST request.
- RNFetchBlob.fetch: sends the request, with the file attached via RNFetchBlob.wrap.
- YOUR_UPLOAD_API_ENDPOINT: a placeholder for your actual upload endpoint.
- response.respInfo.status: the HTTP status code, used to branch between success and failure handling.
Notes:
- Replace YOUR_UPLOAD_API_ENDPOINT with your real upload URL.
- The server must accept the multipart upload and store the file somewhere appropriate.
- You can add upload progress feedback and richer error handling; a progress sketch follows below.
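rn-fetch-blob reports upload progress on the request it returns. A minimal sketch, assuming a caller that drives a progress bar from a 0-1 fraction:
const uploadWithProgress = (path, apiUrl, onProgress) =>
  RNFetchBlob.fetch('POST', apiUrl, { 'Content-Type': 'multipart/form-data' }, [
    { name: 'audio', filename: path.split('/').pop(), data: RNFetchBlob.wrap(path) },
  ])
    // uploadProgress fires with bytes written vs. total as the body streams.
    .uploadProgress((written, total) => onProgress(written / total));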
6. Putting It All Together
Now we combine recording, playback, and sending in the final VoiceRecorder.js component:
import React, { useState, useEffect } from 'react';
import { View, Text, TouchableOpacity, StyleSheet, Platform } from 'react-native';
import AudioRecorderPlayer, { AVEncoderAudioQualityIOSType, AVEncodingOption, AudioEncoderAndroidType, AudioSourceAndroidType } from 'react-native-audio-recorder-player';
import RNFetchBlob from 'rn-fetch-blob';
const audioRecorderPlayer = new AudioRecorderPlayer();
const VoiceRecorder = () => {
const [isRecording, setIsRecording] = useState(false);
const [recordSecs, setRecordSecs] = useState(0);
const [recordTime, setRecordTime] = useState('00:00:00');
const [audioFilePath, setAudioFilePath] = useState('');
const [isPlaying, setIsPlaying] = useState(false);
const [currentPositionSec, setCurrentPositionSec] = useState(0);
const [currentDurationSec, setCurrentDurationSec] = useState(0);
const [playTime, setPlayTime] = useState('00:00:00');
const [duration, setDuration] = useState('00:00:00');
// Recording options; these keys come from the library's AudioSet type.
const audioSet = {
AudioEncoderAndroid: AudioEncoderAndroidType.AAC,
AudioSourceAndroid: AudioSourceAndroidType.MIC,
AVEncoderAudioQualityKeyIOS: AVEncoderAudioQualityIOSType.high,
AVNumberOfChannelsKeyIOS: 2,
AVFormatIDKeyIOS: AVEncodingOption.aac,
};
useEffect(() => {
audioRecorderPlayer.setSubscriptionDuration(0.09);
return () => {
audioRecorderPlayer.removeRecordBackListener();
audioRecorderPlayer.removePlayBackListener();
};
}, []);
const startRecording = async () => {
if (isRecording) return;
const path = Platform.select({
ios: 'hello.m4a',
// Record into the app cache directory; the sdcard root is not writable
// under scoped storage on Android 10+.
android: `${RNFetchBlob.fs.dirs.CacheDir}/hello.mp4`,
});
try {
const uri = await audioRecorderPlayer.startRecorder(path, audioSet);
audioRecorderPlayer.addRecordBackListener((e) => {
setRecordSecs(e.current_position);
setRecordTime(audioRecorderPlayer.mmssss(Math.floor(e.current_position)));
});
console.log(`uri: ${uri}`);
setAudioFilePath(uri);
setIsRecording(true);
} catch (error) {
console.error('startRecording error', error);
}
};
const stopRecording = async () => {
if (!isRecording) return;
try {
const result = await audioRecorderPlayer.stopRecorder();
audioRecorderPlayer.removeRecordBackListener();
setRecordSecs(0);
console.log(result);
setIsRecording(false);
setRecordTime('00:00:00');
} catch (error) {
console.error('stopRecording error', error);
}
};
const startPlaying = async () => {
if (isPlaying || !audioFilePath) return; // nothing recorded yet
try {
console.log('start playing');
const msg = await audioRecorderPlayer.startPlayer(audioFilePath);
const volume = await audioRecorderPlayer.setVolume(1.0);
console.log(`audioRecorderPlayer.startPlayer: ${msg}`);
console.log(`audioRecorderPlayer.setVolume: ${volume}`);
audioRecorderPlayer.addPlayBackListener((e) => {
setCurrentPositionSec(e.current_position);
setCurrentDurationSec(e.duration);
setPlayTime(audioRecorderPlayer.mmssss(Math.floor(e.current_position)));
setDuration(audioRecorderPlayer.mmssss(Math.floor(e.duration)));
});
setIsPlaying(true);
} catch (error) {
console.error('startPlaying error', error);
}
};
const stopPlaying = async () => {
console.log('stop playing');
await audioRecorderPlayer.stopPlayer();
audioRecorderPlayer.removePlayBackListener();
setIsPlaying(false);
};
const uploadAudio = async () => {
if (!audioFilePath) return; // nothing recorded yet
try {
const apiUrl = 'YOUR_UPLOAD_API_ENDPOINT'; // replace with your upload endpoint
const fileName = audioFilePath.split('/').pop();
// Strip any file:// scheme; RNFetchBlob.wrap expects a plain path.
const path = audioFilePath.replace('file://', '');
const response = await RNFetchBlob.fetch('POST', apiUrl, {
'Content-Type': 'multipart/form-data',
}, [
{
name: 'audio',
filename: fileName,
data: RNFetchBlob.wrap(path),
},
]);
if (response.respInfo.status === 200) {
console.log('Upload successful');
// handle a successful upload here
} else {
console.log('Upload failed', response.respInfo);
// handle a failed upload here
}
} catch (error) {
console.error('uploadAudio error', error);
}
};
return (
<View style={styles.container}>
<Text style={styles.recordTime}>{recordTime}</Text>
<Text style={styles.recordTime}>{playTime} / {duration}</Text>
<TouchableOpacity style={styles.button} onPressIn={startRecording} onPressOut={stopRecording}>
<Text style={styles.buttonText}>{isRecording ? 'Recording...' : 'Hold to Record'}</Text>
</TouchableOpacity>
<TouchableOpacity style={styles.button} onPress={isPlaying ? stopPlaying : startPlaying}>
<Text style={styles.buttonText}>{isPlaying ? 'Stop' : 'Play'}</Text>
</TouchableOpacity>
<TouchableOpacity style={styles.button} onPress={uploadAudio}>
<Text style={styles.buttonText}>Send</Text>
</TouchableOpacity>
</View>
);
};
const styles = StyleSheet.create({
container: {
alignItems: 'center',
justifyContent: 'center',
padding: 20,
},
recordTime: {
fontSize: 20,
marginBottom: 10,
},
button: {
backgroundColor: '#4CAF50',
padding: 15,
borderRadius: 10,
marginTop: 10,
},
buttonText: {
color: 'white',
fontSize: 18,
textAlign: 'center',
},
});
export default VoiceRecorder;
Suggestions for further improvement:
- UI: use more polished components, such as a custom record button and a chat-bubble style player.
- Audio quality: tune the recording parameters (sample rate, bit rate, codec) to trade quality against file size.
- Error handling: cover network failures and file I/O errors more thoroughly.
- Performance: use caching and asynchronous work to keep the UI responsive.
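On the receiving side, voice messages usually arrive as URLs rather than local files. A minimal sketch using react-native-sound (selected back in section 1) to play a message from a remote URL; the URL here is a placeholder:
import Sound from 'react-native-sound';

// Play a received voice message from a remote URL.
const playRemoteVoice = (url) => {
  Sound.setCategory('Playback'); // iOS: play even when the ringer switch is silent
  const sound = new Sound(url, null, (error) => {
    if (error) {
      console.error('Failed to load remote audio', error);
      return;
    }
    sound.play((success) => {
      if (!success) console.log('Playback failed');
      sound.release(); // free the native player when done
    });
  });
};

playRemoteVoice('https://example.com/voice/hello.m4a');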
7. Summary
This article walked through building WeChat-style voice messaging in React Native: recording, playback, and sending. With react-native-audio-recorder-player for recording and local playback, react-native-permissions for runtime permissions, rn-fetch-blob for uploads, and react-native-sound for playing received audio, you can assemble a complete voice message module quickly. I hope it helps you get more out of React Native and build better mobile apps.
Next steps:
- Explore more of the react-native-sound and react-native-audio-recorder-player APIs.
- Learn how to build real-time voice chat on top of WebSockets.
- Dig deeper into audio processing and codec techniques.
I hope this article was helpful. Questions and suggestions are welcome in the comments!