Adding text transcripts for accessibility
From the course: OpenAI API: Building Front-End Voice Apps with the Realtime API and WebRTC
- [Instructor] Accessibility is a core promise of the web. So if you build a web app that is not accessible, the work is not done. And that applies to chat apps too, especially ones that use voice. So in 04_transcripts, I've expanded our example to also display text transcripts in the conversation. Let me show you what I mean by that.
- [Bot] Hello, how can I help you today?
- [Instructor] Write me a haiku about a fox.
- [Bot] Clever fox dashes, through moonlit woods silent night, quacks by the duck pond.
- [Instructor] So what you're seeing here is that I'm using the existing chat feature to display both the incoming messages from the AI and whatever I say. For this to work, we need to do two completely different things. As I've explained before, the AI outputs both audio and text, so we can always capture the text coming from the audio. But the API doesn't automatically generate a transcript of what I'm saying unless I explicitly tell it to. That's done by going into the session…
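To make the two steps concrete, here is a minimal sketch in JavaScript of how they might look, assuming a WebRTC data channel `dc` is already open as in the earlier lessons. The event and field names follow the OpenAI Realtime API; `appendToChat` is a hypothetical stand-in for the app's existing chat-rendering helper.

```javascript
// Sketch: enabling and consuming transcripts over the Realtime API data channel.
// Assumes `dc` is an open RTCDataChannel from the WebRTC setup in earlier videos;
// `appendToChat(role, text)` is a hypothetical helper for the existing chat UI.

// 1. Ask the API to transcribe the user's incoming audio. Without this,
//    only the model's own speech has a text counterpart.
dc.addEventListener("open", () => {
  dc.send(JSON.stringify({
    type: "session.update",
    session: {
      input_audio_transcription: { model: "whisper-1" },
    },
  }));
});

// 2. Listen for transcript events coming back from the API.
dc.addEventListener("message", (event) => {
  const msg = JSON.parse(event.data);

  // Full transcript of one of the AI's audio responses.
  if (msg.type === "response.audio_transcript.done") {
    appendToChat("assistant", msg.transcript);
  }

  // Transcript of what the user said; only emitted because
  // input_audio_transcription was enabled above.
  if (msg.type === "conversation.item.input_audio_transcription.completed") {
    appendToChat("user", msg.transcript);
  }
});
```

The `session.update` payload is the "explicitly tell it to" step the instructor mentions; the two transcript events are what the chat pane then renders.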
Contents
- Hands on with five JavaScript AI voice apps (1m 39s)
- OpenAI authentication with ephemeral tokens (4m 34s)
- Understanding the WebRTC flow (3m 6s)
- A bare-bones JavaScript Realtime API implementation (6m 29s)
- Configuring assistant messages and settings (3m 21s)
- Adding visualizations with the Web Audio API (2m 35s)
- Adding text chat to a Realtime app (3m)
- Adding text transcripts for accessibility (3m 8s)
- Function calling with the Realtime API (4m 1s)