[Dual Model] Bert-VITS2 | GPT-SoVITS "A Bite of China" (舌尖上的中国) Narration Voice Model

Part of this post is hidden.
Paid content: 280
This content is paywalled; please purchase to view the full post.

Narration Overview

The narration of A Bite of China showcases a remarkable voice: clear and pleasing, it delicately sketches out the allure of each dish. The pacing is judged just right, neither so fast that details are lost nor so slow that it drags, so information flows smoothly and viewers can sink leisurely into the depiction of every dish. The narration is filled with deep affection for the ingredients and reverence for food culture, and this sincere emotion strikes a chord with the audience. Whether in standard Mandarin or dialect, the pronunciation is impeccably accurate, displaying the unique charm and deep cultural heritage of the Chinese language. The narrator's command of rhythm turns the whole telling into a culinary symphony, leading the audience through a dual feast of palate and spirit.

GPT-SoVITS Model Narration Demo

Given the autoregressive nature of the GPT-SoVITS model, its narration emotion depends heavily on the reference audio provided. Please note: the emotion demonstrated in this video is only an example produced with one particular reference clip, and does not fully reflect either the range of emotions GPT-SoVITS can generate or the upper limit of its final output quality. The model's final performance will vary with different reference audio inputs.
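In practice, the reference clip is supplied at inference time alongside the target text. A minimal sketch of assembling such a request, assuming the HTTP interface of the original GPT-SoVITS `api.py` running locally on port 9880; the field names follow that script and may differ in newer versions, and the file path and sentences below are hypothetical:

```python
# Sketch: how the reference audio steers GPT-SoVITS at inference time.
# Assumes the original GPT-SoVITS api.py is running on 127.0.0.1:9880;
# field names are taken from that script and may vary by version.

def build_tts_payload(ref_wav: str, prompt_text: str, text: str) -> dict:
    """Bundle the reference clip and the target line into one request body."""
    return {
        "refer_wav_path": ref_wav,   # emotion and timbre come from this clip
        "prompt_text": prompt_text,  # transcript of the reference clip
        "prompt_language": "zh",
        "text": text,                # the line to synthesize
        "text_language": "zh",
    }

payload = build_tts_payload(
    "ref/shejian_calm.wav",          # hypothetical reference clip
    "高原上的人们,很少能够见到这样的美食。",
    "每一道佳肴的背后,都是人与食材的故事。",
)
# import requests
# audio = requests.post("http://127.0.0.1:9880", json=payload).content
```

Swapping `refer_wav_path` for a clip with a different emotional tone changes the synthesized delivery, which is exactly why the demo above reflects only one point in the model's range.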

Bert-VITS2 model download

GPT-SoVITS model download


Bert-VITS2 Model Narration Demo


Training Log

'skip_optimizer': True}, 'data': {'training_files': 'Data/shejian/filelists/train.list', 'validation_files': 'Data/shejian/filelists/val.list', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 128, 'mel_fmin': 0.0, 'mel_fmax': None, 'add_blank': True, 'n_speakers': 896, 'cleaned_text': True, 'spk2id': {'shejian': 0}}, 'model': {'use_spk_conditioned_encoder': True, 'use_noise_scaled_mas': True, 'use_mel_posterior_encoder': False, 'use_duration_discriminator': True, 'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 8, 2, 2], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256}, 'version': '2.1', 'model_dir': 'Data\\shejian\\models'}
2023-12-05 14:47:30,936	models	WARNING	E:\Bert-VITS2.1\Bert-VITS2.1 is not a git repository, therefore hash value comparison will be ignored.
2023-12-05 14:47:32,100	models	INFO	Loaded checkpoint 'Data\shejian\models\DUR_0.pth' (iteration 0)
2023-12-05 14:47:32,391	models	ERROR	emb_g.weight is not in the checkpoint
2023-12-05 14:47:32,441	models	INFO	Loaded checkpoint 'Data\shejian\models\G_0.pth' (iteration 0)
2023-12-05 14:47:32,612	models	INFO	Loaded checkpoint 'Data\shejian\models\D_0.pth' (iteration 0)
2023-12-05 14:47:47,265	models	INFO	Train Epoch: 1 [0%]
2023-12-05 14:47:47,272	models	INFO	[2.182603597640991, 2.9057748317718506, 10.952286720275879, 28.820287704467773, 3.6838698387145996, 4.135763168334961, 0, 0.0001]
2023-12-05 14:47:51,167	models	INFO	Saving model and optimizer state at iteration 1 to Data\shejian\models\G_0.pth
2023-12-05 14:47:52,115	models	INFO	Saving model and optimizer state at iteration 1 to Data\shejian\models\D_0.pth
2023-12-05 14:47:52,665	models	INFO	Saving model and optimizer state at iteration 1 to Data\shejian\models\DUR_0.pth
2023-12-05 14:48:53,433	models	INFO	====> Epoch: 1
2023-12-05 14:49:52,301	models	INFO	====> Epoch: 2
2023-12-05 14:50:51,309	models	INFO	====> Epoch: 3
2023-12-05 14:51:50,258	models	INFO	====> Epoch: 4
2023-12-05 14:52:49,014	models	INFO	====> Epoch: 5
2023-12-05 14:53:47,336	models	INFO	====> Epoch: 6
2023-12-05 14:54:46,230	models	INFO	====> Epoch: 7
2023-12-05 14:55:45,226	models	INFO	====> Epoch: 8
2023-12-05 14:56:43,798	models	INFO	====> Epoch: 9
2023-12-05 14:57:41,554	models	INFO	====> Epoch: 10
2023-12-05 14:58:39,477	models	INFO	====> Epoch: 11
2023-12-05 14:59:38,301	models	INFO	====> Epoch: 12
2023-12-05 15:00:36,724	models	INFO	====> Epoch: 13
2023-12-05 15:01:34,415	models	INFO	====> Epoch: 14
2023-12-05 15:02:32,157	models	INFO	====> Epoch: 15
2023-12-05 15:03:30,983	models	INFO	====> Epoch: 16
2023-12-05 15:04:29,296	models	INFO	====> Epoch: 17
2023-12-05 15:05:27,058	models	INFO	====> Epoch: 18
2023-12-05 15:06:24,779	models	INFO	====> Epoch: 19
2023-12-05 15:07:23,544	models	INFO	====> Epoch: 20
2023-12-05 15:08:21,983	models	INFO	====> Epoch: 21
2023-12-05 15:09:19,862	models	INFO	====> Epoch: 22
2023-12-05 15:10:17,712	models	INFO	====> Epoch: 23
2023-12-05 15:11:16,538	models	INFO	====> Epoch: 24
2023-12-05 15:11:23,071	models	INFO	Train Epoch: 25 [10%]
2023-12-05 15:11:23,079	models	INFO	[2.2657299041748047, 2.438272476196289, 7.2701592445373535, 19.681480407714844, 2.563971996307373, 2.161208391189575, 2000, 9.988006897470668e-05]
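The last field of each loss line is the learning rate: 0.0001 at epoch 1, and 9.988006897470668e-05 by global step 2000 in epoch 25. This is consistent with per-epoch exponential decay, as in VITS-style trainers. A minimal check, assuming a decay factor of 0.99995 per epoch (inferred by fitting the two logged values, not read from the config, so verify against your own `config.json`):

```python
# Reproduce the learning-rate value from the training log above.
# VITS-style trainers decay the LR once per epoch:
#   lr = initial_lr * lr_decay ** completed_epochs
# lr_decay = 0.99995 is inferred from the log itself, not from the config.

initial_lr = 1e-4       # starting LR shown in the epoch-1 loss line
lr_decay = 0.99995      # assumed per-epoch decay factor
completed_epochs = 24   # 24 epochs finished before the "Train Epoch: 25" line

lr = initial_lr * lr_decay ** completed_epochs
print(f"{lr:.15e}")  # ≈ 9.988006897e-05, matching the last log line
```

The agreement to ten significant figures suggests the run used this default decay schedule unchanged.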

How to Use the Voice Models

1. GPT-SoVITS model: cloud deployment

https://aiaf.cc/gpt-sovits-yunduan/.html

2. GPT-SoVITS model: local deployment

https://aiaf.cc/gpt-sovits/.html

3. Bert-VITS2 model

https://aiaf.cc/bert-vits2/.html

For one-on-one remote tutoring on model installation and training, contact WeChat: xiaoming1870

Voice Copyright Statement

The AI voice models shown on this site are created and provided by the site owner and studio. They follow a non-commercial-use principle and are for entertainment only. We respect the rights of copyright holders, and where no authorization has been granted we neither hold nor claim usage rights. Fees for model preparation and related work cover service costs only and do not involve charging for copyrighted material. All activity stays within the law, respecting copyright and sharing lawfully. For questions, copyright information, or feedback, feel free to contact us, so we can jointly promote AI voice art and a culture of copyright respect.

