Recent research on emotion regulation and forward models suggests that emotional signals are produced in a goal-directed way and monitored for errors, like other intentional actions. We created a digital audio platform to covertly modify the emotional tone of participants' voices while they talked, toward happiness, sadness, or fear. We found that, while external listeners perceived the audio transformations as natural examples of the intended emotions, the great majority of participants remained unaware that their own voices were being manipulated. We take this to indicate that people are not continuously monitoring their own voice to ensure that it meets a predetermined emotional target. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed, as measured by both self-report and skin conductance responses (SCRs). This, we believe, is the first evidence of peripheral feedback effects on emotional experience in the auditory domain. As such, this result reinforces the wider framework of self-perception theory: that we often use the same inferential strategies to understand ourselves that we use to understand others.