From the course: Vibe Coding with Lovable: From Idea to Prototype in Under an Hour (No Code Required)
Vibe coding limitations: Data privacy and security - Lovable Tutorial
So in this video, we're going to cover privacy and security, both for our own application and for our users. Let's start off with Lovable's privacy policies. Let's go to our third tab here on Lovable, where I have the lovable.dev/privacy webpage open. Under the terms of service, we grant Lovable a perpetual, royalty-free license to use our customer data for business purposes, including operating and improving services, training models, and analytics. This means that, unless you're on the Business plan, any information you put into Lovable may be used as training data. Now, we're building a pretty simple application here, but if you're building something more advanced or proprietary, keep this in mind. And if you're using it in a business context, run it by your legal team to make sure it doesn't violate any internal policies. If we scroll down, we can see some additional policies about sharing with third parties, so I recommend you dig into this a little more as you continue to use other tools.

Going back to Lovable, you can see that there are some confirmation screens that pop up. If we look at the actions our AI models are performing, some of them run automatically while others ask for permission. In this case, adding, deleting, or modifying data is one of those operations. We can see here that "Ask each time" is the current setting; we can also switch it to "Always allow" or "Never allow." So depending on what we want the AI models to do, we can choose different settings while we're vibe coding.

Now, there are certain risks when we use different applications. In a future video, we'll cover connectors, which allow our AI model, for the purpose of vibe coding, to interact with other applications. This changes how our AI model uses data and sends data as well.
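To make those three permission modes concrete, here's a small TypeScript sketch of how a data-modifying action could be gated. The type and function names are purely illustrative, not Lovable's actual code:

```typescript
// The three modes shown in the permission settings for AI actions.
type PermissionMode = "ask" | "always" | "never";

// Decide whether a data-modifying action may run under a given mode.
function mayProceed(mode: PermissionMode, userApproved: boolean): boolean {
  if (mode === "always") return true;   // runs without a confirmation prompt
  if (mode === "never") return false;   // always blocked
  return userApproved;                  // "ask": depends on the confirmation dialog
}
```

The point is simply that "Always allow" trades a prompt for speed, while "Ask each time" keeps a human in the loop for destructive operations.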
So we'll go into more detail there, but I wanted to bring that up now because it does have security and privacy implications.

Going back to our application: we've been building a language coach, and in this case, we're not collecting much personal information. If we go to Cloud here, click on Users, and scroll down, you can see the information we have: the e-mail address and some of the progress the user is making. If we scroll up and go back to Database, we have Profiles, User Streaks, and Progress. We're not asking for anything particularly sensitive, but what you'll need to do for compliance and privacy really depends on your jurisdiction. For example, in California, there's the California Consumer Privacy Act, and you can see here the different disclosures and policies you'd have to implement if you build a large-scale application: what kind of information we're collecting about users, the user's right to delete, the right to opt out, and non-discrimination policies.

So when we're deploying a Lovable application (going back to Lovable, clicking Close, and publishing our application), we have to be aware of what information users are adding, depending on how they'll interact with it. In practice, there aren't many ways to import information into this application: usually we just have our learning cards here, and we can import an article from a URL. In theory, a user could import some kind of personal information from a URL, but because it's public, that's fairly low risk. Still, users can do some fairly silly things in your application, so just keep an eye on it.

Now let's see how we can track whether somebody's trying to do something suspicious in our application. If we go back to Lovable and open Cloud, we can go to Logs to see the different activities happening in the application. We can see the logs here.
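Since the URL import is the main way data enters this app, a basic validation layer is one way to reduce that risk. Here's a hedged TypeScript sketch, assuming we only want public HTTPS articles; the function name and block list are illustrative, not Lovable's actual implementation:

```typescript
// Illustrative block list: loopback and cloud-metadata hosts that a
// server-side fetch should never be pointed at.
const BLOCKED_HOSTS = ["localhost", "127.0.0.1", "0.0.0.0", "169.254.169.254"];

// Hypothetical check for a user-supplied import URL.
function isAllowedImportUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  // Only fetch public HTTPS pages; reject other schemes (http:, file:, ...)
  if (url.protocol !== "https:") return false;
  // Reject internal and metadata hosts to limit SSRF-style abuse.
  if (BLOCKED_HOSTS.includes(url.hostname)) return false;
  // Very rough check for private IPv4 ranges, for illustration only.
  if (/^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(url.hostname)) return false;
  return true;
}
```

A real deployment would also resolve DNS before fetching, since a public hostname can point at a private address, but even this coarse filter blocks the obvious cases.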
We can also click on AI to see how many times we've called the model; in this case, only a few times. And if we go back, we can open the analytics page on the top right to see how many visitors we've had, page views, and so forth. So if some kind of attack happens, say a hacker spamming our login, we can confirm that it's happening and decide whether we need to close our site or block new users from activating. If we scroll down, we'll see a little more information.

Going back to Cloud, we can click on Users and open the Authentication Settings. On the page where we set up our initial user sign-in, we can create more rigorous policies. For e-mail, for example, we can change a few things: we can require longer passwords, change how we do e-mail authentication, and change re-authentication for password changes. And if we exit out of here, we can also disable signups entirely. So if we're seeing a lot of spam coming in, we can disable these to stop new users from signing up while we get to the bottom of things.

Finally, let's go to the Security tab. We briefly used the Security tab to fix two issues in the past, but we can rerun this automated security scan to see if there are any new issues. You can see it on the left-hand side. We can also connect third-party tools via GitHub; there are code scanners and other types of analysis available in the GitHub ecosystem, which can be pretty helpful as well. It looks like our model is performing the security analysis now, and we can see the different tools being used here: reading through files, running different scans, and so forth. A nice thing about this feature is that it runs for free, without incurring additional credit costs. We definitely don't want to launch an application that has security vulnerabilities.
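To illustrate what a "longer passwords" policy actually enforces, here's a hedged TypeScript sketch of a password check with a minimum length and some character-variety rules. This mirrors the kind of setting we just toggled; it is an assumption about the policy shape, not Lovable's or Supabase's actual validation code:

```typescript
interface PasswordCheck {
  ok: boolean;
  problems: string[]; // human-readable reasons the password was rejected
}

// Hypothetical stricter policy: minimum length plus character variety.
function checkPassword(password: string, minLength = 12): PasswordCheck {
  const problems: string[] = [];
  if (password.length < minLength) problems.push(`shorter than ${minLength} characters`);
  if (!/[a-z]/.test(password)) problems.push("no lowercase letter");
  if (!/[A-Z]/.test(password)) problems.push("no uppercase letter");
  if (!/[0-9]/.test(password)) problems.push("no digit");
  return { ok: problems.length === 0, problems };
}
```

Returning the list of problems, rather than just a boolean, lets the sign-up form tell users exactly what to fix.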
Let's scroll down here and see what the findings are. Okay, here we go: we have a medium warning. Let's make this a little bigger. The medium warning is for the edge function not supporting authentication right now, and this is the function that calls Gemini Flash to do our import. Since it's a warning, we can ask for a correction to be added. We also have an informational finding about limited URL validation, meaning some other kinds of attacks could potentially happen. So let's ask Lovable to fix these issues: click on Fix Edge Function Authentication and press Enter.

Before launching your product, it's a good idea to have a senior engineer review the code, even though the AI model has checked its own work. It's an important step to make sure you're building secure, privacy-minded applications, and that you don't end up on the front page of a news article about vibe-coding security issues.

Right, there we go. It looks like we have our changes. Let's go ahead and test for functionality. Click on French, go to Import from URL, and open our French article. Okay, click on Import from URL, paste it here, and generate flashcards. Scrolling through, it looks like it still works. Whenever we make a change with an AI model, always make sure to test and verify, because the best way to secure something is to turn it off, and obviously we don't want that to happen.

Earlier, we mentioned that poor security can let somebody run up additional costs on your application. In the next video, we'll go into more detail on how we can be cost-minded, both when building and deploying our application.
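For a sense of what that edge function authentication fix typically adds, here's a hedged TypeScript sketch of the gate: reject any request that doesn't carry a bearer token before calling the model. The function names are illustrative; a real Supabase-style edge function would also verify the JWT's signature, which this sketch only marks with a comment:

```typescript
// Pull the token out of an "Authorization: Bearer <token>" header, if any.
function extractBearerToken(authHeader: string | null): string | null {
  if (!authHeader) return null;
  const match = /^Bearer\s+(\S+)$/.exec(authHeader);
  return match ? match[1] : null;
}

// Hypothetical request gate that runs before the model is called.
function authorize(headers: Map<string, string>): { status: number; body: string } {
  const token = extractBearerToken(headers.get("authorization") ?? null);
  if (!token) {
    return { status: 401, body: "Missing or malformed Authorization header" };
  }
  // ...a real function would verify the JWT here, then call the model...
  return { status: 200, body: "ok" };
}
```

Without a gate like this, anyone who discovers the function's URL can call your model directly, which is exactly how unexpected costs show up.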