Mintplex Labs is on a mission to make LLMs accessible to everyone—and Docker is helping make that happen.
The open source platform AnythingLLM allows users to build with RAG, AI Agents, and more.
By containerizing with Docker and publishing on Docker Hub's GenAI catalog, Mintplex Labs is scaling LLM adoption across hobbyists, enterprises, and everyone in between.
In this video, Timothy Carambat covers:
🤖 Easy deployment with Docker containers
🤖 Privacy-first & self-hosted
🤖 1M+ Docker Hub pulls
This is how you democratize GenAI, the open source way. 👇
#Docker #GenAI #LLM #OpenSource #AIForEveryone
Hey everyone, my name's Timothy Carambat, and I'm the founder of Mintplex Labs and the creator of AnythingLLM. AnythingLLM is an all-in-one AI productivity tool that allows you to use LLMs in a way that actually makes sense. We're not really a playground; we're more for using LLMs to do things for you: basic productivity tasks, AI agents, all of this right out of the box, without you having to be a developer.

Now, you may have heard of AnythingLLM already, in which case that's great. And if you haven't, you may be wondering what the specialty of AnythingLLM is. AnythingLLM strives to do basically everything you need without you having to install plugins, write any custom code, or do any complex setup. Multi-user management, connecting to many different LLM providers and models at the same time, chatting with documents, running AI agents, creating your own AI agents: all of this is built in, with all of the technology baked into the larger product. So no worries about a vector database or anything like that; AnythingLLM handles it.

At Mintplex Labs, our mission is to make complex technology accessible to the everyday person, and AnythingLLM really succeeds in that fashion. You can imagine most people land on a GitHub repo, see the value of a piece of software, and immediately just want to click play. Docker is one of the closest things you can have to a play button for a GitHub repo (a minimal pull-and-run is sketched below), and that is one of the reasons we went with Docker.

Now, you may be asking: we also offer a desktop app, so who is that for? The Docker version has so many features; it's very server-based and multi-user focused. If you're looking for a more single-player experience with even less setup, we offer the desktop app as a convenient way of kicking the tires on AnythingLLM, so you can find out how powerful the tool is before making any kind of technical investment in the product.

One of the great things about deploying AnythingLLM on Docker has been that we are distributing AnythingLLM at all levels of the ecosystem: everywhere from the intro-to-computer-science person just learning about Docker and AI pulling our image, all the way to giant enterprises who want to jump into AI, but without the nuance and complexity of "where's our data?" or "should we pay a million dollars to some cloud LLM provider?" A lot of people just want the tooling. The LLM is always important, but the tooling is even more so when you're talking about organizational use of AI, and that is something I think we have excelled at. We have everyone from individual contributors all the way up to vice presidents of IT deploying AnythingLLM through Docker, just so that their organization, their small team, or that one person can use AI.

One of the largest benefits of having this open source project deployed through a Dockerized container has been the consistency of the experience. We know that no matter where you're deploying that Docker container, you're getting the same experience as everyone else. That's really important, especially for an app that aims to be super accessible and usable by the everyday person; the experience has to remain consistent, and no matter where you deploy AnythingLLM via Docker, you're going to get that experience. That's exactly what we aim for, and it also makes contributions extremely consistent.
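For readers who want that play-button experience right away, here is a minimal pull-and-run sketch. It assumes the public mintplexlabs/anythingllm image on Docker Hub; the port, mounts, and STORAGE_DIR value mirror the project's quickstart at the time of writing, so double-check the current README before relying on them.

```bash
# Minimal sketch: run AnythingLLM from the public Docker Hub image.
# The mounts keep documents, chats, and vectors on the host so the
# container can be replaced without losing state.
export STORAGE_LOCATION="$HOME/anythingllm"
mkdir -p "$STORAGE_LOCATION" && touch "$STORAGE_LOCATION/.env"

docker run -d --name anythingllm \
  -p 3001:3001 \
  -v "$STORAGE_LOCATION:/app/server/storage" \
  -v "$STORAGE_LOCATION/.env:/app/server/.env" \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm

# The UI is then available at http://localhost:3001
```

Because all state lives in the mounted directory, upgrading is just pulling the newer image and re-creating the container.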
We know adding a feature here is not going to break any deployments currently running on any type of operating system; it doesn't matter with Docker. With Docker we just deploy it and it just works, and that takes away so much stress and complexity from managing such a large open source project.

Lastly, one of the biggest benefits of shipping AnythingLLM via Docker has been that we can add all of the tooling to auto-publish and build the latest image right into the repo, and deliver the highest-quality code as quickly as possible to all of our users by publishing on Docker Hub. That has been so nice to have, because now we don't have to worry about rebuilding the app and putting it on our own CDN, or any of the other complexities that are simply solved when you use something like Docker.

Some of the biggest advantages we've seen deploying AnythingLLM via Docker have been the consistency of the experience from one machine to another, no matter where people are deploying AnythingLLM. That's really important to us, because otherwise we would spend all of our time managing bugs on random operating systems, random deployment architectures, and all of the other things that just aren't an issue when you use something like Docker. And that's honestly such a blessing for us. We're a really small team; AnythingLLM has a core team of only three people, including myself, and using tools like Docker and its deployment infrastructure takes away so much unnecessary complexity that we would otherwise need to maintain. That's one of the largest benefits for a small team: we can move super fast on every feature and every little request that comes through on our repo, and we know that when it gets deployed, the experience will just work, because we're using Docker.

One of the main benefits for people using AnythingLLM in general, but especially deploying it on their own servers or their own hardware, has been that they no longer need to rely on paying a monthly ChatGPT subscription or paying for some other cloud LLM at such a high expense. Most people now have hardware that is very capable of running compressed or quantized LLMs on device. We connect to all of the popular providers you are probably using right now: Ollama, LM Studio, LocalAI. All of these open and closed source local AI inference engines work with AnythingLLM, so you can have a server running both of these programs and get a full end-to-end private experience on servers you maintain, using no external third-party applications.

And as you can imagine, one of the top priorities for AnythingLLM is privacy. If we want you to run an AI model locally and use our application locally, one of the biggest advantages of that is privacy. The documents you upload, the chats you send, the agents you run, and the actions you take in AnythingLLM stay on the device you deployed it on. They don't go anywhere else unless you want them to. If you want to use a cloud provider for heavier generative AI tasks, we allow you to do that. If you want to store your vector databases on some other service, just because you pay for it, or you like their uptime, or you love their performance, you can do that in AnythingLLM too.
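As a concrete illustration of that fully local setup, here is one possible sketch running Ollama next to AnythingLLM on the same host. The LLM_PROVIDER, OLLAMA_BASE_PATH, and OLLAMA_MODEL_PREF variables are taken from the project's sample environment file and should be treated as assumptions here; the same provider settings can also be configured from the AnythingLLM UI instead.

```bash
# Sketch: fully local inference with Ollama + AnythingLLM on one host.
# Env var names are assumptions based on the project's sample .env;
# verify against the current documentation.

# 1) Serve a quantized model locally with Ollama.
docker run -d --name ollama -p 11434:11434 \
  -v ollama:/root/.ollama ollama/ollama
docker exec ollama ollama pull llama3

# 2) Point AnythingLLM at the Ollama endpoint on the host.
#    host-gateway lets the container reach the host's port 11434.
docker run -d --name anythingllm -p 3001:3001 \
  --add-host=host.docker.internal:host-gateway \
  -v "$HOME/anythingllm:/app/server/storage" \
  -e STORAGE_DIR="/app/server/storage" \
  -e LLM_PROVIDER="ollama" \
  -e OLLAMA_BASE_PATH="http://host.docker.internal:11434" \
  -e OLLAMA_MODEL_PREF="llama3" \
  mintplexlabs/anythingllm
```

In this arrangement, prompts, documents, and embeddings never leave the machine, which is the end-to-end private experience described above.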
One of the things that has been really great about deploying through Docker is the whole Docker Hub Explore page that we are featured on, and that has helped catalyze our growth and the adoption of our tool massively. We have been able to gain significant traction and adoption, and I'm extremely thankful for that part of AnythingLLM's partnership with Docker. We've gotten more users, more feature requests, more stars on GitHub; we've just got more people interested in what we're doing and contributing, and that is extremely powerful.

For the indefinite future, it's almost certain that deploying AnythingLLM on a server-based architecture is going to be best done with Docker. No ifs, ands, or buts about it. It's so consistent, it's so repeatable, and the experience it provides is the same for everyone regardless of what you're using. There's really no reason for us to stray from that. As Docker evolves into more of a catalog and a place for people to explore and find solutions, we expect that AnythingLLM will be one of those solutions that get surfaced, and we'll try to surface it in the best way possible through our partnership.

And if you are a developer looking to start building in AI, you don't want to deal with all of the complexity of "I'm running a server with XYZ, but your service doesn't work." Don't waste your time on that. If you just use Docker, all of those problems go away, and you can focus on building a product as opposed to doing bug fixes for servers and whatever odd operating systems people decide to deploy on. It's inevitable: the more users you get, the more edge cases you run into. With Docker, you can really cut away a lot of those edge cases. So definitely start with Docker.

Docker has helped Mintplex Labs make its applications more accessible to individuals, teams, and enterprises alike. We're working together to empower more people to unlock the potential of LLMs without any of the headaches of development or deployment processes. If you're a developer, jumpstart your AI projects by exploring the Docker AI catalog, and don't forget to visit Docker Hub and check out the AnythingLLM image while you're there. And that is how AnythingLLM leverages Docker and its powerful containerization service to deploy millions of installations globally, on all types of infrastructure. If you want more videos like this, be sure to give this video a like and subscribe to the channel, and we'll show you more about how you can use Docker, not only for deploying AI applications, but for any application in general.