From the course: OpenAI API: Building Front-End Voice Apps with the Realtime API and WebRTC


Adding text transcripts for accessibility

- [Instructor] Accessibility is a core promise of the web, so if you build a web app that is not accessible, the work is not done. That applies to chat apps too, especially ones that use voice. In the 04 transcripts example, I've expanded our app to also show text transcripts in the conversation. Let me show you what I mean by that. - [Bot] Hello, how can I help you today? - [Instructor] Write me a haiku about a fox. - [Bot] Clever fox dashes through movement woods, silent night, quacks by the duck pond. - [Instructor] So what you're seeing here is I'm using the existing chat feature to also display both the incoming messages from the AI and whatever I say. For this to work, we need to do two completely different things. As I've explained before, the AI outputs both audio and text, so we can always capture the text coming with the audio. But the API doesn't automatically generate a transcript of what I'm saying unless I explicitly tell it to. That's done by going into the session…
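The two pieces the instructor describes can be sketched as follows. This is a minimal illustration, not the course's actual code: the event and field names (`session.update`, `input_audio_transcription`, `response.audio_transcript.done`, `conversation.item.input_audio_transcription.completed`) follow OpenAI's Realtime API documentation, but how the session update is sent (e.g. over the WebRTC data channel) depends on the app's setup elsewhere.

```javascript
// 1) Ask the API to transcribe the *user's* audio. Without this, only the
//    model's own replies come with text; the user's speech has no transcript.
const sessionUpdate = {
  type: "session.update",
  session: {
    // "whisper-1" transcribes the incoming (user) audio stream.
    input_audio_transcription: { model: "whisper-1" },
  },
};
// In the app this object would be JSON-serialized and sent to the API,
// e.g. dataChannel.send(JSON.stringify(sessionUpdate)).

// 2) Pull transcript text out of incoming server events so both sides of
//    the conversation can be shown in the chat UI.
function extractTranscript(event) {
  switch (event.type) {
    case "response.audio_transcript.done":
      // Full transcript of the assistant's spoken reply.
      return { role: "assistant", text: event.transcript };
    case "conversation.item.input_audio_transcription.completed":
      // Transcript of what the user said (only sent when enabled above).
      return { role: "user", text: event.transcript };
    default:
      return null; // ignore unrelated events
  }
}

// Example: feeding a mock server event through the handler.
const mock = {
  type: "conversation.item.input_audio_transcription.completed",
  transcript: "Write me a haiku about a fox.",
};
console.log(extractTranscript(mock));
```

The key design point is that the assistant's transcript arrives for free alongside its audio, while the user's transcript is opt-in and arrives asynchronously, after the speech has been processed, so the chat UI has to handle the two streams separately.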