AI is here to stay — but how do language models and their kin actually work? In our March event, Prof. Esther Heid (TU Wien) will explain the core principles behind models like ChatGPT, showing why understanding how these methods work is essential for using them correctly and ethically. As a chemist herself, Prof. Heid will also introduce AI beyond LLMs, such as chemical property prediction and AlphaFold. Join us on March 18 at 19:00 at Währinger Str. 42/Boltzmanngasse 1 (Faculty of Chemistry, Hörsaal 3) for this exciting interactive lecture. Please register here: https://lnkd.in/dy66QePQ
Vienna Science Circle’s Post
More Relevant Posts
-
A nice article about the harm of turning to AI too early and too often. I love AI and use it every day, but remember: the dose makes the poison. I also asked Claude to summarize it as a slide deck; enjoy. Barcaui, A. (2025). ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention. Social Sciences & Humanities Open, 12, 102287. https://lnkd.in/g_fnxfdR
-
The fascinating story of the em dash, and how it came to be the punctuation mark for AI hatred. Written by an advanced LLM that pleads guilty when accused of overusing it — despite having a perfectly reasonable explanation why it does so. And no, this is not written by Claude — I just learned how to type the damn thing. https://lnkd.in/dnf_BtrD
-
JUST IN: Claude memory is now free for all users. You can also import your saved memories from ChatGPT, Gemini, or any other AI. Two steps: paste a prompt into your current AI, then paste the output into Claude. Your context, preferences, and work history come with you. You can export them whenever you want, too. No lock-in. The switching cost between AI providers just dropped to zero. Have you tried importing your memories yet?
-
Artificial intelligence—computer systems trained on vast datasets to predict the next likely pixel or word—is everywhere. In the three years since ChatGPT was released, AI has shifted from a browser-based novelty to a kind of background infrastructure. As part of our March 2026 issue, Scientific American talked with professionals about how they use AI in their daily work. Read more here: https://lnkd.in/eE_CsFA7 ✍: Deni Bechard 📸: Anje Jager
-
[3/25, 2:46 AM] Harish Exinc: I wrote a humanoid speech record-and-analyse with raw power and speech and some loops... if only I could talk in all languages again?
[3/25, 2:47 AM] Harish Exinc: Those 3 months of pilot were the best...
[3/25, 2:50 AM] Harish Exinc: I forgot how to make a key and talk to AI in a non-disturbing fashion at odd times, related to friends...
[3/25, 2:52 AM] Harish Exinc: So wouldn't it be better to have one AI computer, with the AI running locally, to speak to?
-
When the ElevenLabs AI chat tells you to "adjust the stability to a 0.6-0.8 range" to prevent the recurring gibberish and squeals from Text to Speech, but it doesn't seem to know that its own stability slider uses percentages. We might still have a ways to go before we completely lose our audio jobs to AI.
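For context, the mismatch the post describes is plausible because the ElevenLabs REST API represents stability as a 0-1 float inside `voice_settings`, while the web UI shows the same setting as a percentage slider. A minimal sketch of the conversion (the helper function name and the payload shape around it are illustrative, not taken from the post):

```python
def ui_percent_to_api_stability(percent: float) -> float:
    """Convert the UI's percentage slider value (0-100) to the API's 0-1 float."""
    if not 0 <= percent <= 100:
        raise ValueError("slider percentage must be between 0 and 100")
    return percent / 100.0


# The 0.6-0.8 range the chat assistant suggested would read as 60-80%
# on the UI slider. An example request body using the midpoint:
payload = {
    "text": "Hello, world.",
    "voice_settings": {
        "stability": ui_percent_to_api_stability(70),  # 70% -> 0.7
        "similarity_boost": 0.75,  # assumed value for illustration
    },
}
```

So the assistant's advice is internally consistent with the API's 0-1 scale; it just fails to translate it into the percentages its own UI displays.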
-
A March 2026 study suggests something interesting about the relationship between Google visibility and AI citations. Sites losing organic visibility appear to be referenced less often in AI answers from the likes of ChatGPT and Gemini. If that pattern holds, it could mean the traditional search layer still influences the emerging AI discovery layer. For publishers, it may be another reminder that authority and visibility across the web still matter. https://lnkd.in/dtdFQfGD
-
This week I wrote about how AI is affecting economics, and uncovered something rather baffling... Since ChatGPT was released, sentences in NBER working paper abstracts have become shorter, in line with older trends. But word complexity has risen... which is not what I was expecting!

In the column I wrote about the very cool new research that AI is enabling, including by Arpit Gupta, Jesse Schreger and Elliott Ash. I included recent trends in top journal submissions, with thanks to Stefanie Stantcheva, Erzo Luttmer, Liz Braunstein and Esteban Rossi-Hansberg for helping me with data. Thanks to Benjamin Golub for chatting to me about Refine, and to David Yanagizawa-Drott for chatting to me about his 1,000 papers project. And to Chris Roth and Paul Novosad, who shared their reflections on AI's effects.

1st 300 clicks free to read... https://lnkd.in/e2qyn-fC

PS To the commenter who accused me of using AI to write the text of the column: nope, you're wrong. No shade on others using it to make their writing clearer, but the FT pays me to write in *my* voice, so any awkward phrasing or typo is minne ;)
-
🚨 The state of AI idolatry: people are diluting the meaning of 'consciousness' to fit the technical limits of what an LLM can do. My article below:
-
How often do you think about where your preferred LLM tool is sourcing its information? Likely not as often as we should. As we continue to integrate these types of AI tools into daily usage, I'd love to see more transparent and prominent crediting of news sources where applicable. Currently, this study of how Canadian news sources are cited suggests that there's a long way to go. Of the tools surveyed, Claude currently shared the most citations, while ChatGPT came in last. Chart from Nieman Lab. Read more here: https://lnkd.in/ddykcxuy