This issue primarily occurs when the conversation becomes longer, especially after around 30 seconds of continuous speech.
In my last interview, there were two interviewers. Parakeet AI identified the first interviewer as Client 1 and the second as Client 2. Initially, it was able to distinguish between both speakers correctly and keep track of what each said.
However, once Client 2 spoke after Client 1 and the conversation had grown longer overall, the AI began to lose track of Client 1. After Client 2 finished speaking, Client 1 re-entered the conversation and asked me a question, but the AI failed to recognize or process that input; it effectively ignored everything Client 1 said after Client 2 had spoken.
Additionally, when Client 1 spoke continuously for an extended duration (roughly 30 seconds), the AI began ignoring input from Client 2 as well and would only register what Client 1 was saying.
In summary, as the script length increases, the AI struggles to maintain context across multiple speakers and starts missing or ignoring parts of the conversation.
Also, when the script became long, scrolling stopped working in the audio script view that shows what the interviewer is saying in real time. I am referring to the scroll in the audio script when it is maximized, not the transcript scroll.
Kindly look into this issue as soon as you can.
Apart from that, Parakeet AI has been working well.
Thanks.