I’m trying to get NPCs to talk. I figured the easiest way would be to use a Derma soundboard that just plays whatever MP3 I designed for their speech. The sound works, but it would really tie it all together to have their mouths move; even if it wasn’t in sync it would be better than nothing. Any thoughts on the best way to proceed?
I don’t think auto lip sync works with .mp3s; I believe it only works with .wavs. I may be wrong, but I think that’s how it worked.
I’ll try it out, but thank you for the heads up. You most likely saved me quite a headache.
[editline]29th July 2015[/editline]
How do I rig their mouths? The wiki says you have to, but it doesn’t mention how.
You don’t need to rig anything. You only rig mouths for custom models.
So literally just have a .wav file server-side and play it using the emit function?
util.AddNetworkString("higuys")

net.Receive("higuys", function(len, ply)
    Entity(1):EmitSound("med/higuys.wav", 100, 100, 1, CHAN_AUTO)
end)
The emitted sound plays, but the NPC (Dr. Breen) doesn’t move his lips at all…
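One thing worth checking: Entity(1) is almost always the first player on the server, not the NPC, so the sound is being emitted from the player and the engine has no reason to animate Breen’s mouth. Below is a hedged sketch (not tested against your setup) that finds the Breen NPC and emits the sound from that entity instead; it also uses CHAN_VOICE, which is the channel Source uses for speech and may be needed for the automatic lip flexing to kick in. The "higuys" network string and "med/higuys.wav" path are taken from your snippet; the npc_breen class lookup assumes a stock HL2 Breen NPC.

```lua
util.AddNetworkString("higuys")

net.Receive("higuys", function(len, ply)
    -- Emit from the NPC itself so the engine's auto lip flex
    -- moves *his* mouth, rather than from Entity(1) (the player).
    local breen = ents.FindByClass("npc_breen")[1]
    if IsValid(breen) then
        -- soundLevel 100, pitch 100, volume 1, on the voice channel
        breen:EmitSound("med/higuys.wav", 100, 100, 1, CHAN_VOICE)
    end
end)
```

If you have several Breens or a custom NPC, you would instead pass the specific entity (e.g. from the net message or a trace) rather than grabbing the first class match.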