Dear Reader,
Welcome to the March 18th issue of our newsletter!
Announcements
We’re excited to announce the official relaunch of the Data For Science website!
At Data4Sci, our goal has always been to bridge the gap between complex data and actionable intelligence. Our revamped site makes it easier than ever to explore how we help teams build reliable, production-ready AI—from RAG and agentic workflows to comprehensive LLM strategy.
Check out the new experience here: 👉 https://data4sci.com/
Whether you're looking for expert consulting, technical training, or our latest deep-dives into AI, we’ve built this for you.
This week’s reading captures a field that is getting sharper at every layer, from theory to infrastructure: one piece makes modern attention mechanisms feel newly legible by showing how long-context models are really a series of engineering trade-offs around memory, compute, and quality, while another offers a much-needed Bayesian reset for practitioners who know the formulas but still want a more intuitive grip on uncertainty and updating beliefs.
Add in a clean walkthrough of transformer architecture, a striking example of what happens when an autonomous research agent can suddenly run hundreds of experiments in parallel on serious GPU infrastructure, and you get the bigger story: machine learning is now about better mental models for reasoning, experimentation, and scale.
Even the more foundational selections reinforce that theme, whether by explaining why bell curves emerge so reliably from messy real-world variation, spotlighting the researchers whose work helped create quantum information science, or pointing toward a near future where software agents transact through open protocols designed for machine-native payments.
This week's set of papers sketches a fascinating tension at the heart of AI right now: we are building systems that look increasingly capable, even intellectually companionable, while still wrestling with the deeper question of whether they truly learn in the way humans do. One essay pushes that tension into the real world by asking whether AI may soon replace some forms of graduate-level research labor.
At the same time, another warns that as these systems become more fluent and persuasive, people may begin outsourcing not just tasks but judgment itself. Around that debate, the more technical papers widen the frame: transformers are recast in probabilistic terms, neural cellular automata hint at radically different ways to train language systems, and new work on autonomous learning argues that today’s models remain far from the flexible, self-directed intelligence suggested by the hype. Even the seemingly distant contributions fit the same larger story, whether by modeling the flow of commodities as dynamic fields or by exploring how thought might be translated across brains through shared linguistic representations.
AI is forcing us to rethink learning, reasoning, communication, and the boundaries between human cognition and the machines that increasingly mirror it.
Our current book recommendation is "Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems" by A. Gullí. You can find all the previous book reviews on our website. In this week's video, we have a tutorial on Complex AI Agents with Python.
Data shows that the best way for a newsletter to grow is by word of mouth, so if you think one of your friends or colleagues would enjoy this newsletter, go ahead and forward this email to them. This will help us spread the word!
Semper discentes,
The D4S Team
A. Gullí’s "Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems" feels like a timely guide for data scientists and machine learning engineers who are ready to move past the hype around AI agents and focus on how these systems are actually built. What makes the book stand out is its practical, pattern-based approach: instead of treating agents like magic, Gullí breaks them into reusable design ideas that help readers think more clearly about architecture, workflows, and implementation. That alone makes it more valuable than many AI books that are heavy on buzzwords and light on substance.
One of the book’s strongest qualities is its hands-on mindset. By working through recognizable frameworks and concrete design patterns, it gives technical readers a clearer path from experimentation to real system design. For ML engineers, that means a stronger grasp of modularity and maintainability; for data scientists, it offers a useful bridge between model knowledge and application building. The book is at its best when it helps readers see agentic systems not as mysterious novelties, but as engineering problems that can be approached systematically.
Its weaknesses are relatively minor but worth noting. Because it leans on current frameworks and tools, some parts may age quickly in such a fast-moving field, and readers looking for a deeper dive into evaluation, benchmarking, or production-scale operations may find it less comprehensive on those fronts. Still, Agentic Design Patterns sounds like the kind of book that can sharpen how technical practitioners think about intelligent systems—and for many readers, that will be reason enough to keep turning the pages.
- A Visual Guide to Attention Variants in Modern LLMs [magazine.sebastianraschka.com]
- Bayesian statistics for confused data scientists [nchagnet.pages.dev]
- The Transformers [www.vizuaranewsletter.com]
- Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster [blog.skypilot.co]
- The Math That Explains Why Bell Curves Are Everywhere [quantamagazine.org]
- 2025 Turing Award for Quantum Information Science [awards.acm.org]
- Out-of-Context Reasoning in LLMs: A short primer and reading list [outofcontextreasoning.com]
- Introducing the Machine Payments Protocol [stripe.com]
- Why I may ‘hire’ AI instead of a graduate student (A. Rosenfeld)
- A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations (Z. Zada, A. Goldstein, S. Michelmann, E. Simony, A. Price, L. Hasenfratz, E. Barham, A. Zadbood, W. Doyle, D. Friedman, P. Dugan, L. Melloni, S. Devore, A. Flinker, O. Devinsky, S. A. Nastase, U. Hasson)
- Thinking Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender (S. D. Shaw, G. Nave)
- Vector fields as a framework for modelling the mobility of commodities (S. Farokhnejad, A. S. da Mata, M. Macedo, R. Menezes)
- Transformers are Bayesian Networks (G. Coppola)
- Why AI systems don't learn and what to do about it: Lessons on autonomous learning from cognitive science (E. Dupoux, Y. LeCun, J. Malik)
- Training Language Models via Neural Cellular Automata (D. Lee, S. Han, A. Kumar, P. Agrawal)
Complex AI Agents with Python
All the videos of the week are now available in our YouTube playlist.
Upcoming Events:
Opportunities to learn from us
On-Demand Videos:
Long-form tutorials
- Natural Language Processing 7h, covering basic and advanced techniques using NLTK and PyTorch.
- Python Data Visualization 7h, covering basic and advanced visualization with matplotlib, ipywidgets, seaborn, plotly, and bokeh.
- Time Series Analysis for Everyone 6h, covering data pre-processing, visualization, ARIMA, ARCH, and Deep Learning models.