Maksym Andriushchenko’s Post

📣 We are expanding our AI Safety and Alignment group at the ELLIS Institute Tübingen and the Max Planck Institute for Intelligent Systems! We have:
- a great cluster at MPI with 50+ GB200s, 250+ H100s, and many A100 80GB GPUs,
- outstanding colleagues (Jonas Geiping, Sahar Abdelnabi, and others),
- competitive salaries (for academia),
- a fully English-speaking environment.

In particular, I'm looking for:
- one postdoc with a proven track record in AI safety,
- PhD students with a strong computer science background and, ideally, experience in cybersecurity, interpretability, or training dynamics,
- master's thesis students (if you are already in Tübingen or can relocate to Tübingen for ~6 months),
- remote mentees for the Summer 2026 MATS cohort (apply directly via the MATS portal).

If you are interested, please fill out this Google form: https://lnkd.in/eXJYfJ8m. I will review every application and reach out if there is a good fit. I will also be at NeurIPS in San Diego and would be glad to chat about these positions in person!

Janmajay Kumar, maybe you'd like something here.
