Kimi-Audio: a universal audio foundation model that handles tasks such as automatic speech recognition (ASR), audio question answering (AQA), automatic audio captioning (AAC), speech emotion recognition (SER), sound event/scene classification (SEC/ASC), and end-to-end speech conversation

Kimi-Audio is designed as a universal audio foundation model, capable of handling a wide variety of audio processing tasks within a single unified framework.

Key features include:

  • Universal capability: handles a wide range of tasks, such as automatic speech recognition (ASR), audio question answering (AQA), automatic audio captioning (AAC), speech emotion recognition (SER), sound event/scene classification (SEC/ASC), and end-to-end speech conversation.
  • State-of-the-art performance: achieves SOTA results on numerous audio benchmarks (see the evaluation and the technical report).
  • Large-scale pretraining: pretrained on over 13 million hours of diverse audio data (speech, music, sounds) plus text data, enabling strong audio reasoning and language understanding.
  • Novel architecture: uses a hybrid audio input (continuous acoustic vectors + discrete semantic tokens) and an LLM core with parallel heads that generate text and audio tokens.
  • Efficient inference: a flow-matching-based chunk-wise streaming detokenizer for low-latency audio generation.
  • Open source: releases code and model checkpoints for both pretraining and instruction fine-tuning, along with a comprehensive evaluation toolkit, to support community research and development.

Official repository: MoonshotAI/Kimi-Audio: Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation

Model weights: https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct

Install Kimi-Audio

git clone https://github.com/MoonshotAI/Kimi-Audio.git
cd Kimi-Audio
git submodule update --init --recursive
pip install -r requirements.txt

Install the Hugging Face download tool

!pip install huggingface-hub

Download the model

!huggingface-cli download moonshotai/Kimi-Audio-7B-Instruct --local-dir ./Kimi-Audio-7B-Instruct --local-dir-use-symlinks False
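The instruct checkpoint is large, so after downloading it is worth confirming how much space it actually occupies. A minimal sketch (the `dir_size_gb` helper is this example's own; the `./Kimi-Audio-7B-Instruct` path just mirrors the `--local-dir` used above):

```python
import os

def dir_size_gb(path: str) -> float:
    """Sum the sizes of all regular files under `path`, in gigabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            # Skip symlinks so a blob referenced twice is not double-counted
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total / 1024**3

if __name__ == "__main__":
    print(f"Download size: {dir_size_gb('./Kimi-Audio-7B-Instruct'):.1f} GB")
```

On a nonexistent path the helper simply returns 0.0, so it is safe to run before the download finishes.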

Run the examples

import soundfile as sf
from kimia_infer.api.kimia import KimiAudio

# --- 1. Load Model ---
model_path = "moonshotai/Kimi-Audio-7B-Instruct" 
model = KimiAudio(model_path=model_path, load_detokenizer=True)

# --- 2. Define Sampling Parameters ---
sampling_params = {
    "audio_temperature": 0.8,
    "audio_top_k": 10,
    "text_temperature": 0.0,
    "text_top_k": 5,
    "audio_repetition_penalty": 1.0,
    "audio_repetition_window_size": 64,
    "text_repetition_penalty": 1.0,
    "text_repetition_window_size": 16,
}

# --- 3. Example 1: Audio-to-Text (ASR) ---
messages_asr = [
    # You can provide context or instructions as text
    {"role": "user", "message_type": "text", "content": "Please transcribe the following audio:"},
    # Provide the audio file path
    {"role": "user", "message_type": "audio", "content": "test_audios/asr_example.wav"}
]

# Generate only text output
_, text_output = model.generate(messages_asr, **sampling_params, output_type="text")
print(">>> ASR Output Text: ", text_output) # Expected output: "这并不是告别,这是一个篇章的结束,也是新篇章的开始。"


# --- 4. Example 2: Audio-to-Audio/Text Conversation ---
messages_conversation = [
    # Start conversation with an audio query
    {"role": "user", "message_type": "audio", "content": "test_audios/qa_example.wav"}
]

# Generate both audio and text output
wav_output, text_output = model.generate(messages_conversation, **sampling_params, output_type="both")

# Save the generated audio
output_audio_path = "output_audio.wav"
sf.write(output_audio_path, wav_output.detach().cpu().view(-1).numpy(), 24000) # Assuming 24kHz output
print(f">>> Conversational Output Audio saved to: {output_audio_path}")
print(">>> Conversational Output Text: ", text_output) # Expected output: "当然可以,这很简单。一二三四五六七八九十。"

# --- 5. Example 3: Audio-to-Audio/Text Conversation with Multiturn ---

messages = [
    {"role": "user", "message_type": "audio", "content": "test_audios/multiturn/case2/multiturn_q1.wav"},
    # This is the first turn output of Kimi-Audio
    {"role": "assistant", "message_type": "audio-text", "content": ["test_audios/multiturn/case2/multiturn_a1.wav", "当然可以,这很简单。一二三四五六七八九十。"]},
    {"role": "user", "message_type": "audio", "content": "test_audios/multiturn/case2/multiturn_q2.wav"}
]
wav, text = model.generate(messages, **sampling_params, output_type="both")


# Save the generated multiturn audio
output_audio_path = "output_audio_multiturn.wav"
sf.write(output_audio_path, wav.detach().cpu().view(-1).numpy(), 24000) # Assuming 24kHz output
print(f">>> Multiturn Output Audio saved to: {output_audio_path}")
print(">>> Multiturn Output Text: ", text) # Expected output: "没问题,继续数下去就是十一十二十三十四十五十六十七十八十九二十。"

print("Kimi-Audio inference examples complete.")

Kaggle ran out of disk space, so I couldn't test any further. Stopping here for now.

Debugging

Error: No module named 'kimia_infer.models.tokenizer.glm4.speech_tokenizer'

But the kimia_infer directory is clearly there. Looking inside, it turns out the missing module comes from the glm4 repo, pulled in as a git submodule .....

No wonder the official instructions include this line:

git submodule update --init --recursive

I got lazy and skipped that line, and the error was the result. No shortcuts here; follow the official steps exactly:

git clone https://github.com/MoonshotAI/Kimi-Audio.git
cd Kimi-Audio
git submodule update --init --recursive
pip install -r requirements.txt
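If you are unsure whether the submodule step actually ran, a quick sanity check is that an uninitialized git submodule shows up as an empty directory. A minimal sketch (the `submodule_ok` helper is this example's own; the checked path is inferred from the import error above and may differ between releases):

```python
import os

def submodule_ok(path: str) -> bool:
    """An uninitialized git submodule is present but empty on disk."""
    return os.path.isdir(path) and len(os.listdir(path)) > 0

# Hypothetical pre-import check; adjust the path to wherever the
# glm4 tokenizer submodule lives in your checkout of Kimi-Audio.
if not submodule_ok("kimia_infer/models/tokenizer/glm4"):
    print("Submodule missing - run: git submodule update --init --recursive")
```

Running this before importing `kimia_infer` turns a confusing ModuleNotFoundError into an actionable message.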

Error: OSError: [Errno 28] No space left on device

  File "/usr/local/lib/python3.11/dist-packages/huggingface_hub/file_download.py", line 497, in http_get
    temp_file.write(chunk)
OSError: [Errno 28] No space left on device

The required space is just too large... the Kaggle P100 environment provides only 57.6 GB of disk, which isn't enough.
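To fail fast instead of hitting ENOSPC halfway through the download, you can compare free space against the expected checkpoint size first. A minimal sketch (the 30 GB threshold is a rough guess for the 7B checkpoint plus temp files, not an official figure):

```python
import shutil

def free_gb(path: str = ".") -> float:
    """Free disk space at `path`, in gigabytes."""
    return shutil.disk_usage(path).free / 1024**3

NEEDED_GB = 30  # rough guess: 7B checkpoint plus download temp files

if free_gb() < NEEDED_GB:
    print(f"Only {free_gb():.1f} GB free - not enough, skipping download")
```

On Kaggle, running this once at the top of the notebook would have saved the wasted download time.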

Giving up.
