> Additionally, rare hallucinations in Voice Mode persist with this update, resulting in unintended sounds resembling ads, gibberish, or background music. We are actively investigating these issues and working toward a solution.
Would be cool to hear some samples of this. I remember there was some hallucinated background music during the meditation demo in the original reveal livestream but haven't seen much beyond that. Artifact of training on podcasts to get natural intonation.
If anyone's wondering, here's a short sample. It quietly updated last night, and I ended up chatting for like an hour. It sounds as smart as before, but like 10x more emotionally intelligent. Laughter is the biggest giveaway, but the serious/empathetic tones for more therapy-like conversations are noticeable, too. https://drive.google.com/file/d/16kiJ2hQW3KF4IfwYaPHdNXC-rsU...
Did it really say partwheel or is it garbled?
I use advanced voice a lot and have come across many weird bugs.
1) Every response would be normal except it would end with a “whoosh”, like one of those sound effects some mail clients use when a message is sent, and the model itself either couldn’t or wouldn’t acknowledge it.
2) The same, except with someone knocking on a door, like something you’d play on a soundboard.
3) The entire history in the conversation disappearing after several minutes of back and forth, leading to the model having no idea what I’m talking about and acting as if it’s a fresh conversation.
4) Advanced voice mode stuttering because it hears its own voice and thinks it’s me interrupting (on a brand new iPhone 16 Pro, medium-low built in speaker volume and built-in mic).
5) Really weird changes in pronunciation or randomly saying certain words high-pitched, or suddenly using a weird accent.
And all of this was prior to these most recent changes.
It also stutters and repeats itself sometimes, and reports a poor connection even though I know the connection is near-ideal.
I may know why that first one happens! They’re not correctly padding the latent in their decoder (by default torch pads with zeros, they should pad with whatever their latent’s representation of silence is). You can hear the same effect in songs generated with our music model: https://sonauto.ai/
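For anyone curious, here’s roughly what that fix looks like in torch (a minimal hypothetical sketch, not our actual decoder code): the default zero padding gets replaced with a latent frame you already know decodes to silence.

    import torch
    import torch.nn.functional as F

    def pad_latents(latents, target_len, silence_frame=None):
        # latents: (time, dim) latent sequence headed for the audio decoder
        missing = target_len - latents.shape[0]
        if missing <= 0:
            return latents
        if silence_frame is None:
            # default behaviour: zero padding -- the decoder renders whatever
            # sound the zero vector happens to map to (the "whoosh")
            return F.pad(latents, (0, 0, 0, missing))
        # instead, repeat a frame known to decode to silence
        tail = silence_frame.expand(missing, latents.shape[1])
        return torch.cat([latents, tail], dim=0)

    # hypothetical usage: encode a short clip of real silence once, cache its
    # last latent frame, and pass it as silence_frame when padding outputs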
Yeah we’re too lazy to fix it too
I’m super curious now: how does padding lead to TTS replies repeatedly ending with what seems to be an actual non-speech sound effect?
If you pad your output with something that doesn't represent silence, then any outputs that happen to have a non-standard length (i.e. nearly all outputs) will end with whatever sound your padding bits represent in the model's embedding space. If "0000" represents "whoosh," then most of your outputs will end in "whoosh."
Here's a non-AI example: If all HN comments had to be some multiple of 50 characters long and comments were padded with the letter "A," then most HN comments would look like the user was screaming at the end. AAAAAAAAAAAAAAAAAA
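Or, as a few lines of purely illustrative Python:

    def pad_comment(text, block=50, pad_char="A"):
        # pad to the next multiple of `block`, the way a decoder pads its output
        missing = -len(text) % block
        return text + pad_char * missing

    print(pad_comment("Great point about latent padding."))
    # ends in a run of 'A's -- the "screaming" artifact from the analogy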
Also a decent AI example, as most AI audio uses base64 encoding, where "AAAAAAAAA" is a string of zeroes.
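For example, every three zero bytes encode to four 'A' characters:

    import base64

    print(base64.b64encode(b"\x00" * 3))   # b'AAAA'
    print(base64.b64encode(b"\x00" * 30))  # 40 'A's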
In addition to what Centigonal said, even if the autoencoder was trained on only speech data, an all zero vector is probably just out of distribution (decoder has never seen it before) and causes weird sounds. However, given the hallucinations we're seeing, the AE has (maybe unintentionally) likely seen a bunch of non-speech data like music and sound effects too.
they still need to post-train out the emissions of all the trapped souls
They absolutely destroyed Sol. I’m not sure what it is now: the disinterest, the umms, the inability to speak directly to the question, a new inflection. But I am pretty mad. I am an avid voice user. I love to use advanced voice while I’m doing tasks, to explore new projects I want to work on, and to get a basic understanding of home renovation tasks, etc. I had to finally change the voice to Maple but ran out of time to see if I could stand it. So disappointing.
At least now I know I’m not crazy and there were in fact changes rolled out.
I have the feeling that the Advanced Voice Mode is significantly worse than when I used it earlier this week. The voice sounds disinterested, and has weird intonation. It used to be excellent for foreign language conversation practice, now significantly worse.
Edit: After using up my 15 minutes for testing, I have to say that the new voice is actually not bad, although I was used to something else. But it has a very clear "artificial" quality to it. It also sometimes misinterprets my input as something completely different than what I said, for example "please like my video and subscribe to my channel".
Stumbled across the new voice this afternoon after months of not using voice mode; after being impressed by the naturalness, I was also let down by the disinterested tone. That, combined with the platitudes and the tendency to repeat back what I was saying without adding new information, left me disappointed with the update.
Is this new? I'm on the Plus plan and just a few days ago carried on a conversation for around 45 minutes while on a walk with my dog.
Agreed though, the new voice’s accent (at least for Sol) sounds significantly degraded, particularly when conversing in Chinese.
Apparently it's 6 months old [1]. You might be using the standard voice mode (the advanced one has just 1 voice IIUC).
[1] https://www.reddit.com/r/OpenAI/comments/1hdamrm/so_advanced...
Thanks. OpenAI's docs are frustratingly vague about the whole thing. It seems (assuming the 15-minute hard limit holds true) that I must have been conversing with advanced mode for 15 minutes, since Advanced is the default for Plus subscribers on the mobile app, and then it may have handed me off to the standard voice mode after that.
Advanced https://help.openai.com/en/articles/9617425-advanced-voice-m...
Standard https://help.openai.com/en/articles/8400625-voice-mode-faq
No, advanced voice mode has multiple voices.
There’s a 15 minute limit?
In the Plus subscription, yes. You can also pay 200 dollars per month for Pro, and in that plan advanced voice mode is unlimited. 200 bucks is quite a lot, I've gotta say. I wish there were a middle-ground option, but even at the 20 dollars for Plus, they should give you more than 15 minutes.
I wish they still had the voice mode that was _only_ text-to-speech and speech-to-text. It didn't sound as good, but it was as smart as the underlying model. The advanced voice mode regularly goes off the rails for me, makes the same mistake repeatedly, and does other things that the text versions of advanced LLMs haven't done for months now.
Don’t they? Press the microphone button for speech-to-text, and the speaker button for text-to-speech
In the app:
Settings > Personalization > Custom Instructions, then the Advanced dropdown. Uncheck Advanced Voice.
On the desktop site:
Profile button > Customize ChatGPT, then the Advanced dropdown. Uncheck Advanced Voice.
> Additionally, rare hallucinations in Voice Mode persist with this update, resulting in unintended sounds resembling ads, gibberish, or background music.
This would be really funny if it weren’t real life.
I keep using standard voice mode (Cove) because I like its grounded voice a lot. The advanced Cove’s voice sounds too much like an overly happy guy. I wish I could tell it to chill and talk normally but it won’t.
I was using it earlier today and noticed something was different. It sounded more lethargic and added a lot more "umms". It's not necessarily bad, just something I need to get used to.
I always get a laugh asking it to talk like an Ent, and I made sure to check that it could still do that.
In my daily use, I just want the answer, not a performance. I'd rather it sound like a smart assistant, not my best friend.
If there's an OpenAI PM reading this: please add the model selector for voice modes. 80% of this thread is users confused about which model they're using.
I think there's only one LLM backbone for voice: it's 4o.
Today I used ChatGPT and, for the first time since I started using it (months ago), the voice was disgusting.
It was the voice of someone (a woman) who was confrontational, someone who does not like you.
It made me want to close and remove the chat immediately.
I don’t suppose you have a bunch of custom instructions telling ChatGPT to be concise, terse, etc., do you? Those impact the voice model too, and it turns out the “get to the point, I’m not an idiot” pre-prompts people have been recommending really don’t translate well when voice mode uses them as a personality.