Learn Data Engineering

IT Services and IT Consulting

Würzburg Area, Bavaria · 46,879 followers

We teach Data Engineering and help companies recruit top talent

About

Learn Data Engineering with our Academy and earn the Associate Data Engineer Certificate. We help companies recruit top engineers leveraging our huge network.

Website
https://learndataengineering.com
Industry
IT Services and IT Consulting
Company size
2–10 employees
Headquarters
Würzburg Area, Bavaria
Type
Sole proprietorship (trade, freelancer, etc.)
Founded
2019

Locations

Employees of Learn Data Engineering

Updates

  • Learn Data Engineering reposted this

    Andreas Kretz · Influencer

    Learn Data Engineering · 156,892 followers

    Clicking around in the cloud console is not building infrastructure. Become a skilled data engineer and learn how to do this properly. That means:

    ✅ Writing Terraform
    ✅ Using Git to manage your infra
    ✅ Automating your deployments
    ✅ And versioning everything

    That’s exactly what we’re doing in one of my Academy projects.

    1️⃣ We define our Azure infrastructure in Terraform
    2️⃣ CI/CD runs via Azure DevOps
    3️⃣ One change to your code kicks off:
    → a new Docker image
    → a fresh Lambda
    → and the whole thing gets rolled out automatically

    And honestly? It’s not that hard. If you know how to write SQL, you can learn Terraform, too. 💪

    ⚠️ Dive right into this Azure project via the link in the comments!
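As a minimal sketch of what step 1️⃣ can look like: a hypothetical Terraform definition of an Azure resource group and storage account. The resource names, region, and provider version are illustrative assumptions, not the Academy project's actual code:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# Hypothetical resource group for the project
resource "azurerm_resource_group" "pipeline" {
  name     = "rg-data-pipeline"
  location = "West Europe"
}

# Storage account for pipeline data; the name must be globally unique
resource "azurerm_storage_account" "data" {
  name                     = "dataengpipelinestore"
  resource_group_name      = azurerm_resource_group.pipeline.name
  location                 = azurerm_resource_group.pipeline.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

From there, the CI/CD pipeline in step 2️⃣ would run `terraform plan` and `terraform apply` on every commit, which is what makes the rollout automatic.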

  • Learn Data Engineering reposted this

    Andreas Kretz · Influencer

    Learn Data Engineering · 156,892 followers

    What happens when you mix: 🐍 Python + 🐳 Docker + ⚡ AWS Lambda + ⏰ Timeseries DB + 🔌 API + 📈 Grafana?

    You get a fully deployable, scheduled ETL pipeline. One that grabs live weather data and pipes it into a time series database, then visualizes it automatically.

    Here’s the flow:
    1️⃣ You write a Python script to pull live data from a weather API
    2️⃣ Package it into a Docker container
    3️⃣ Push that container to AWS ECR
    4️⃣ Deploy it on Lambda
    5️⃣ Schedule it using EventBridge
    6️⃣ Store the data in TDengine, a time series database
    7️⃣ Visualize it all beautifully in Grafana

    🥂 Done. You’ve just built a real-time data system that updates itself. No cron jobs, no cloud console clicking.

    This is one of those projects where people go: “Wait, you can actually do that with Lambda and a time series DB??”

    Yes. You can. And you should.

    Great for showing off real ETL skills, AWS experience, and your ability to build complete pipelines, not just scripts.

    ⚠️ Dive right into it via the link in the comments!
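Steps 1️⃣ and 6️⃣ can be sketched in a few lines of Python. Everything below is illustrative: the API URL, field names, and the insert step are assumptions, not the Academy project's actual code:

```python
import json
import urllib.request
from datetime import datetime, timezone

# Hypothetical endpoint; the real project uses a live weather API
WEATHER_URL = "https://api.example.com/v1/current?city=Berlin"

def fetch_weather(url: str = WEATHER_URL) -> dict:
    """Pull the current weather reading from the API."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def to_row(payload: dict) -> dict:
    """Flatten the API payload into one time-series row."""
    return {
        "ts": payload.get("time") or datetime.now(timezone.utc).isoformat(),
        "temperature_c": float(payload["temperature"]),
        "windspeed_kmh": float(payload["windspeed"]),
    }

def handler(event, context):
    """Lambda entry point: fetch, transform, and store one reading."""
    row = to_row(fetch_weather())
    # Here the row would be inserted into the time-series DB (TDengine
    # in the project), e.g. INSERT INTO weather VALUES (ts, temp, wind)
    return row
```

EventBridge then invokes `handler` on a schedule, so each run appends exactly one fresh reading to the database.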

  • Learn Data Engineering reposted this

    Andreas Kretz · Influencer

    Learn Data Engineering · 156,892 followers

    In this video, I do a quick walkthrough of the new Amazon SageMaker provided by Amazon Web Services (AWS) and show how everything works in the new unified interface. 👉 https://lnkd.in/da-hsuKM

    We get into:
    ✅ how SageMaker brings data analytics, notebooks, AI, and ML together in one place
    ✅ accessing data from S3 and the AWS Glue Data Catalog
    ✅ working with notebooks using both Python and SQL
    ✅ using the built-in AI agent to generate code, and exploring data with charts

    You’ll also see visual ETL pipelines, query editing with Athena, model selection and deployment, inference endpoints, and how to work directly in JupyterLab or VS Code, all inside SageMaker. If you’re curious how Amazon SageMaker looks and feels today, this is a fast end-to-end overview.

    #dataengineering #aws #awspartner #sagemaker #amazonsagemaker

  • Learn Data Engineering reposted this

    Andreas Kretz · Influencer

    Learn Data Engineering · 156,892 followers

    AI agents are just REST API calls: change my mind! Without APIs, there are no AI agents. Here are 5 things you actually need to master:

    ✅ 𝗔𝗣𝗜 𝗙𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹𝘀: Understand what APIs are, how they work behind the scenes, and why REST is the go-to standard in most systems
    ✅ 𝗔𝗣𝗜 𝗗𝗲𝘀𝗶𝗴𝗻: Learn how to structure your endpoints, choose proper request/response formats, and write documentation others can actually use
    ✅ 𝗙𝗮𝘀𝘁𝗔𝗣𝗜 + 𝗣𝘆𝘁𝗵𝗼𝗻: Build high-performance APIs with a modern framework that’s simple but powerful
    ✅ 𝗔𝗣𝗜 𝗧𝗲𝘀𝘁𝗶𝗻𝗴: Use tools like Postman to make sure your APIs behave as expected before they hit production
    ✅ 𝗗𝗼𝗰𝗸𝗲𝗿 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁: Containerize your API so it can run anywhere, reliably and at scale

    If you’re working with data pipelines, backend services, or systems that need to integrate with others, API skills are a must-have!

    Well-designed APIs keep your systems flexible, scalable, and easier to maintain over time. But many developers struggle with poor structure, lack of testing, or APIs that are hard to deploy and maintain.

    Check out my Building APIs with FastAPI course:
    ⚠️ Find the full training link in the comments!
    ⚠️ We also offer personal coaching to learn Data Engineering. Link is in the comments as well 👍

  • Learn Data Engineering reposted this

    Andreas Kretz · Influencer

    Learn Data Engineering · 156,892 followers

    Your database is probably doing way more work than you think. Especially if you store time-series data in PostgreSQL.

    At small scale, everything looks fine. But once the dataset grows, the workload changes completely:
    • data keeps coming in
    • queries scan time ranges
    • indexes grow out of control

    That’s where things start to break down. I wrote a breakdown of why time-series data behaves differently and how tools like TimescaleDB change the way you handle it 👇

    #dataengineering #timeseriesdata #timeseriesdatabase #tigerdata #timescaledb #sponsored
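One reason time-range queries behave differently: time-series engines group rows into time buckets (chunks), so a range scan only touches the relevant partitions instead of one ever-growing index. A toy version of TimescaleDB's `time_bucket()` in plain Python, for illustration only:

```python
from datetime import datetime, timedelta

def time_bucket(width: timedelta, ts: datetime) -> datetime:
    """Truncate ts to the start of its bucket, like TimescaleDB's time_bucket()."""
    epoch = datetime(1970, 1, 1, tzinfo=ts.tzinfo)
    seconds = int((ts - epoch).total_seconds())
    bucket_start = seconds - seconds % int(width.total_seconds())
    return epoch + timedelta(seconds=bucket_start)

# Group readings into 5-minute buckets: the shape of a typical downsampling query
readings = [
    (datetime(2024, 1, 1, 12, 1), 3.2),
    (datetime(2024, 1, 1, 12, 4), 3.4),
    (datetime(2024, 1, 1, 12, 7), 3.9),
]
buckets: dict[datetime, list[float]] = {}
for ts, value in readings:
    buckets.setdefault(time_bucket(timedelta(minutes=5), ts), []).append(value)
```

In the database the same idea runs per chunk, which is why a "last 24 hours" query stays fast even when years of data sit in the table.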

  • Learn Data Engineering reposted this

    Andreas Kretz · Influencer

    Learn Data Engineering · 156,892 followers

    Just watched Ravit Jain’s interview with Zilliz at NVIDIA GTC. Great discussion about where AI is heading! What stood out to me is the prediction that the focus shifts away from tables towards unstructured data like images, videos, and text, and the need to actually search through them.

    Last year I built a small RAG project with Milvus on my own machine and played around with it for a bit. Milvus is the open-source vector database developed by Zilliz. You can run it locally, and they also provide a fully managed service with Zilliz Cloud.

    The news is: Zilliz Cloud is now available on the Amazon Web Services (AWS) Marketplace! Check it out here: https://lnkd.in/dYjCNXXe

    For teams building GenAI applications, this makes it much easier to integrate vector search into an existing AWS setup without running and managing the infrastructure yourself. I really like that.

    They also mentioned some real use cases like autonomous driving and drug discovery, which shows where all this is already being applied.

    Curious where this is going next.

    #AWSPartner #NVIDIAGTC #theravitshow #Zilliz

  • Learn Data Engineering reposted this

    André Tempera · Self-employed · 2,471 followers

    Last year I was awarded a free one-year subscription to the Learn Data Engineering Data Academy, and now I’m happy to share that I’ve obtained a new certification: Associate Data Engineer! Special thanks to Learn Data Engineering and Andreas Kretz for the opportunity to learn from the academy and step up my skills! It’s a great place to learn, with comprehensive content!

  • Check out our new FREE LAB 🤩 Topic: Orchestration!

  • Big news, everyone 😃

    Andreas Kretz · Influencer

    🚀 New partner at LearnDataEngineering.com: IBM 🚀

    Big news, everyone! IBM is now a partner of Learn Data Engineering in our Free Labs & Partners section! You can check out the full IBM partner page, with a focus on watsonx.data integration, right here:

    👉 https://lnkd.in/dv49s54Z

    I still remember how impressive it was when IBM Watson won Jeopardy! years ago. Seeing how that early breakthrough has evolved into today’s watsonx portfolio is genuinely exciting for me as a data engineer.

    With watsonx, IBM focuses on AI & agents, data, governance, observability, and orchestration.

    Especially relevant for us data engineers today is their work around watsonx.data integration. It provides a unified way to design and manage data pipelines across batch, streaming, and replication workloads, with no-code, low-code, and pro-code options that make pipeline creation more accessible for teams of all skill levels. The platform focuses on reducing tool overload, simplifying integration, and delivering AI-ready data efficiently across hybrid and multi-cloud environments.

    On the new partner page, you’ll find:
    🔹 A video introduction to watsonx.data integration with practical demos
    🔹 An exploration of how AI agents can streamline integration and engineering workflows
    🔹 An eBook on how AI-powered data integration reduces complexity and accelerates insights
    🔹 Additional curated resources for modern data engineering

    I’m also creating a hands-on video right now that shows how to integrate your own data from Google Cloud Storage to power an AI agent that answers your questions about the data.

    Having IBM as a partner helps us create more great content for you 🙂

    More partners, labs, and free learning resources are on the way, all designed to help you learn Data Engineering with real tools, real examples, and real-world context.

    #LearnDataEngineering #DataEngineering #IBM #IBMPartner #IBMwatsonx

  • There’s a free webinar coming up on October 15: Best Practices for Writing ETL/ELT Pipelines, hosted by Astronomer.

    Many data engineers still struggle with fragile DAGs, pipelines that break when data changes, or “hacks” to make things dynamic. If you’ve ever wanted your ETL/ELT pipelines to be more reliable, easier to maintain, and less painful to build, then this session is for you.

    🗓 October 15, 2025 | 11am ET / 4pm BST
    🎙 Hosted by Tamara Fingerlin, Developer Advocate at Astronomer
    Save your spot here: https://lnkd.in/efqmXYDZ
    Top tip: register for it even if you can’t join live. You’ll get the recording afterwards 😉

    You’ll learn:
    ✅ ETL vs. ELT and how to choose the right one for your use case
    ✅ Common Airflow DAG patterns that work well in production
    ✅ Key Airflow 3.0 features like dynamic task mapping, assets, and event-driven scheduling
    ✅ How to make DAGs dynamic in a scalable way (finally!)

    I think this is going to be super helpful, especially for those of you working on complex pipelines or trying to scale your workflows across teams.

    What’s your biggest struggle when building ETL/ELT pipelines?
