Affective social anthropomorphic intelligent system
Abstract
At present, intelligent virtual assistants (IVAs) are no longer only about delivering functionality and improving performance; they also need a socially interactive personality. Since human conversational style is characterized by sense of humor, personality, and tone of voice, these qualities have become essential for conversational IVAs. Our proposed system is an anthropomorphic intelligent system that can hold a proper human-like conversation with emotion and personality. It can also imitate any person's voice, provided that audio data of that voice is available.
Initially, the temporal audio waveform is converted into frequency-domain data (a Mel-spectrogram), which contains distinct patterns for audio features such as notes, pitch, rhythm, and melody.
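As a minimal sketch of this step (the file name, 16 kHz sample rate, and spectrogram parameters below are assumptions rather than the paper's exact settings), the conversion can be done with librosa:

import numpy as np
import librosa

# Load the raw waveform; file name and sample rate are placeholder assumptions.
waveform, sr = librosa.load("speaker_sample.wav", sr=16000)

# A short-time Fourier transform followed by a Mel filter bank turns the
# temporal signal into a Mel-spectrogram where pitch, rhythm, and timbre
# appear as distinct time-frequency patterns.
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)

# Log (dB) compression is the usual input scale for the downstream networks.
log_mel = librosa.power_to_db(mel, ref=np.max)
print(log_mel.shape)  # (80, n_frames)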
A parallel CNN and Transformer-Encoder network is used to predict the emotion of the audio across seven emotion classes.
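The PyTorch sketch below illustrates the parallel CNN / Transformer-Encoder idea; the layer sizes, pooling choices, and feature fusion are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class ParallelCNNTransformer(nn.Module):
    """Two parallel branches over the Mel-spectrogram: a CNN for local
    time-frequency patterns and a Transformer encoder for long-range
    context; their features are concatenated and classified into
    seven emotion classes."""
    def __init__(self, n_mels=80, d_model=128, n_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch, 32, 1, 1)
        )
        self.proj = nn.Linear(n_mels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(32 + d_model, n_classes)

    def forward(self, mel):                   # mel: (batch, n_mels, n_frames)
        cnn_feat = self.cnn(mel.unsqueeze(1)).flatten(1)    # (batch, 32)
        seq = self.proj(mel.transpose(1, 2))   # (batch, n_frames, d_model)
        trans_feat = self.encoder(seq).mean(dim=1)          # (batch, d_model)
        return self.classifier(torch.cat([cnn_feat, trans_feat], dim=1))

logits = ParallelCNNTransformer()(torch.randn(2, 80, 200))  # -> (2, 7)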
The audio is also fed to DeepSpeech, an RNN model with five hidden layers, which generates the text transcription from the spectrogram.
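Assuming the Mozilla DeepSpeech 0.9.x release and its Python bindings are what is used here, transcription can be sketched as follows; the model, scorer, and audio file names are placeholders.

import numpy as np
import wave
from deepspeech import Model   # Mozilla DeepSpeech Python bindings

# Pre-trained acoustic model and language-model scorer (placeholder names).
ds = Model("deepspeech-0.9.3-models.pbmm")
ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# DeepSpeech expects 16-bit PCM audio at the model's sample rate (16 kHz).
with wave.open("speaker_sample.wav", "rb") as w:
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

transcript = ds.stt(audio)     # text transcription of the utterance
print(transcript)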
The transcribed text is then passed to a multi-domain conversational agent that uses Blended Skill Talk, a transformer-based retrieve-and-generate strategy, and beam-search decoding to produce an appropriate textual response.
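One possible realization of such an agent, assumed here purely for illustration, is a BlenderBot checkpoint trained with the Blended Skill Talk setup, queried through Hugging Face Transformers with beam-search decoding:

from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

# Choosing this particular checkpoint is an assumption for illustration.
name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

transcript = "Hello there, how has your day been?"  # stands in for the ASR output
inputs = tokenizer(transcript, return_tensors="pt")

# Beam-search decoding keeps the highest-scoring candidate responses.
reply_ids = model.generate(**inputs, num_beams=5, max_length=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))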
The response is synthesized back into speech: Mel-spectrogram frames are generated one at a time, each conditioned on the previous frames, and the waveform is then produced from the generated spectrogram by WaveGlow, a flow-based model built on WaveNet and Glow that learns an invertible mapping of the data to a latent space that can be manipulated.
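A sketch of this final vocoding step, assuming NVIDIA's published WaveGlow checkpoint on Torch Hub; the spectrogram below is random data standing in for the frames generated upstream.

import torch

# Pre-trained WaveGlow vocoder from NVIDIA's Torch Hub entry point; relying
# on this published checkpoint is an assumption about the implementation.
waveglow = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub",
                          "nvidia_waveglow", model_math="fp32")
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow.eval()

# Placeholder for the 80-band Mel-spectrogram produced frame by frame
# upstream; shape is (batch, n_mels, n_frames).
mel = torch.randn(1, 80, 300)

with torch.no_grad():
    # WaveGlow samples a latent vector and runs the inverse flow,
    # conditioned on the spectrogram, to produce the waveform.
    audio = waveglow.infer(mel)
print(audio.shape)  # (1, n_samples)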
A fine-tuned version of the system can be used in applications including, but not limited to, dubbing, voice assistants, and re-creating new movies with old actors' voices.