From the course: OpenAI API: Building Front-End Voice Apps with the Realtime API and WebRTC
Adding visualizations with the Web Audio API
- [Instructor] When you build an audio-based interface, it's best practice to provide some sort of visual indicator that audio is playing. That way the user knows something is happening, and if their audio isn't working properly, they'll be able to troubleshoot. In this basic reference implementation, there's no such indicator, so the user has no idea if audio is playing or if their mic is working. That's why I built this second example, an audio visualizer. Here we have two visualizers, one for the AI and one for the microphone, to show the user exactly what is happening. If I create this connection, you'll see it in action.
- [Computer] Hello, how can I assist you today?
- [Instructor] Can you write a haiku about a duck?
- [Computer] Duck glides on pond, feathers-
- [Instructor] Okay, so what you saw here looked really advanced, but what's actually happening is I'm hooking into the Web Audio API. This is an API that exists in all modern browsers that allows a webpage to grab any…
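The transcript cuts off here, but the technique it describes is standard Web Audio API usage: route each MediaStream through an AnalyserNode and draw its frequency data to a canvas on every animation frame. Below is a minimal sketch of that idea, not the course's actual code; the `visualizeStream` and `initVisualizers` functions, the canvas IDs, and the `peerConnection` parameter (assumed to be the RTCPeerConnection from the reference implementation) are all illustrative assumptions.

```js
// Sketch: visualize both the local mic and the remote AI audio with AnalyserNodes.
// Note: browsers may start an AudioContext suspended until a user gesture;
// call audioContext.resume() from a click handler if nothing draws.
const audioContext = new AudioContext();

function visualizeStream(stream, canvas) {
  // Route the stream through an AnalyserNode, which exposes frequency data
  // for drawing without changing how the audio itself plays.
  const source = audioContext.createMediaStreamSource(stream);
  const analyser = audioContext.createAnalyser();
  analyser.fftSize = 256;
  source.connect(analyser);

  const data = new Uint8Array(analyser.frequencyBinCount);
  const ctx = canvas.getContext('2d');

  function draw() {
    requestAnimationFrame(draw);
    analyser.getByteFrequencyData(data);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // Draw one bar per frequency bin, scaled to the canvas height.
    const barWidth = canvas.width / data.length;
    data.forEach((value, i) => {
      const barHeight = (value / 255) * canvas.height;
      ctx.fillRect(i * barWidth, canvas.height - barHeight, barWidth - 1, barHeight);
    });
  }
  draw();
}

async function initVisualizers(peerConnection) {
  // Microphone visualizer: the same local stream that is sent to the model.
  const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  visualizeStream(micStream, document.querySelector('#mic-visualizer'));

  // AI visualizer: the remote audio track arriving over the WebRTC connection.
  peerConnection.addEventListener('track', (event) => {
    const remoteStream = event.streams[0] ?? new MediaStream([event.track]);
    visualizeStream(remoteStream, document.querySelector('#ai-visualizer'));
  });
}
```

Because the analyser only taps the streams, this can be layered onto an existing WebRTC setup without touching how the audio is captured or played back.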
Contents
- Hands on with five JavaScript AI voice apps (1m 39s)
- OpenAI authentication with ephemeral tokens (4m 34s)
- Understanding the WebRTC flow (3m 6s)
- A bare-bones JavaScript Realtime API implementation (6m 29s)
- Configuring assistant messages and settings (3m 21s)
- Adding visualizations with the Web Audio API (2m 35s)
- Adding text chat to a Realtime app (3m)
- Adding text transcripts for accessibility (3m 8s)
- Function calling with the Realtime API (4m 1s)