Self-updating graphs, interoperable commerce, retrieval-first memory, and stable Transformers.

Good morning, AI enthusiasts,

A lot of the best progress in AI right now isn't flashy. It's structural. It's the quiet work of turning messy information into systems you can query, automation you can trust, and architectures that don't require brute-force scaling to get better. That's the thread running through this week's issue. You'll see how meeting notes can become a self-updating knowledge graph you can actually interrogate, why an open commerce protocol might let AI assistants interact with retailers without brittle one-off integrations, and what neuroscience can teach us about memory as a retrieval problem rather than a storage problem. We also include a hands-on build of an image-based recommendation and search engine with embeddings + Elasticsearch, and a deeper look at a Transformer architecture upgrade designed to improve training stability and performance without simply scaling up compute. Let's get into it.

What's AI Weekly

This week, in What's AI, I am addressing something we should have left behind in 2025 but didn't: overengineering our AI agents. I break down the what, how, and when of workflows, single agents, and multi-agent systems, so you can decide what level of autonomy and complexity your application actually requires. Watch the video on YouTube or read the complete article here.

— Louis-François Bouchard, Towards AI Co-founder & Head of Community

February Cohort Kicks Off Feb 1: Your Exit From the LLM Hype Room

If you're done with demos that don't turn into real skill, start this cycle with a clear build sequence and a checklist that tells you what "ready" actually means. Kickoff is Feb 1, 2026 (in 72 hours). The link is included in your welcome email when you enroll in any Towards AI course.

Want the quickest exit from demo-land? Start with the 10-Hour Crash Course → Expert LLM Developer Bundle: a single, sequenced track that takes you from LLM fundamentals to production discipline and full-stack execution, so you're not stitching your learning together from random threads. It combines our bestselling production book with our most adopted courses and deployed frameworks. Enroll here (includes cohort access)!

Learn AI Together Community Section!

Featured Community post from the Discord

Belocci has created UniTrainer, a desktop application that provides a modern GUI for training computer vision and machine learning models locally or on cloud GPUs, without command-line workflows, environment headaches, or fragmented tooling. It is designed for builders, students, and researchers who want to focus on ideas and data. Check it out on GitHub and support a fellow community member. If you have a question or feedback, share it in the thread!

AI poll of the week!

Most of you don't attend AI conferences, which is fair; travel, time, and cost add up. I still think they matter, because good conferences can compress months of progress into days: small-room workshops, real hiring conversations, and serendipitous intros you won't get from a livestream. So I am curious: if you don't go to conferences, where do you actually network? Drop your go-tos in the thread. You can also share your city, role, and one space you want to learn or build in, so you can meet the right people.

Collaboration Opportunities

The Learn AI Together Discord community is full of collaboration opportunities.
If you are excited to dive into applied AI, want a study partner, or even want to find a partner for your passion project, join the collaboration channel! Keep an eye on this section, too; we share cool opportunities every week!

1. M.zalt is looking for a business partner to build and launch an AI platform. If you have experience in BD, sales, or marketing, this might be a great opportunity for you. Find more details in the thread!

2. Ankush09x is building a system that converts spoken English into visual Pitman shorthand symbols and is looking to collaborate with someone who can help with a linguistics and rendering problem. If this sounds relevant, connect with him in the thread!

3. Beepboop003 is looking for someone to work with on AI content creation projects. If you want to explore this space, reach out to them in the thread!

Meme of the week!

Meme shared by bin4ry_d3struct0r

TAI Curated Section

Article of the week

Building a Self-Updating Knowledge Graph From Meeting Notes With LLM Extraction and Neo4j
By CocoIndex

This article shows how to build a pipeline for a self-updating knowledge graph that extracts actionable intelligence from static meeting notes. The system uses CocoIndex to monitor Google Drive, employing an LLM to extract structured data such as attendees, tasks, and decisions from Markdown files. This information is then organized and stored in a Neo4j graph database. A central feature is its incremental processing, which updates the graph only when source documents change, improving efficiency. The result is a queryable resource that lets users run complex, relationship-based searches across all meeting-related information. A hedged code sketch of this extract-and-upsert pattern appears below, after the first two must-read summaries.

Our must-read articles

1. Google Just Launched a Protocol That Could Change E-Commerce Forever
By Gowtham Boyina

Google has introduced the Universal Commerce Protocol (UCP), an open-source standard designed to address the fragmented e-commerce landscape. The protocol aims to eliminate custom integrations between platforms and merchants by providing a common language for transactions. The article outlines how UCP exposes modular capabilities, such as checkout and order management, that AI agents can dynamically discover and interact with on retailer systems. It also covers the protocol's architecture, security features, and competition from other agentic commerce systems. A hypothetical discovery-and-checkout sketch appears below.

2. Key Value Memory In The Brain
By Edgar Bermudez

This analysis presents memory not as a simple storage system but as a key-value mechanism, arguing that many memory failures stem from retrieval issues rather than information decay. The model separates memory into "keys" (addresses optimized for search) and "values" (content optimized for fidelity). This framework provides a computational explanation for memory consolidation, suggesting that the hippocampus creates keys while the neocortex stores values. Over time, the cortex develops its own keys, reducing hippocampal dependence. This perspective reframes memory as a search problem and connects established neuroscience theories with the attention mechanisms of modern AI; a short numpy sketch of that connection follows just below.
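For the article of the week above: the summary describes an extract-then-upsert loop, and the minimal sketch below shows one way that loop can look, assuming the LLM step has already produced structured JSON. It uses the official neo4j Python driver; `extract_meeting_facts`, the graph schema, and the connection details are invented stand-ins for illustration, not CocoIndex's actual API.

```python
# A minimal sketch, not CocoIndex's actual pipeline: assumes an LLM has
# already turned one meeting note into structured JSON, and shows how that
# JSON can be idempotently upserted into Neo4j with the official driver.
from neo4j import GraphDatabase

def extract_meeting_facts(markdown: str) -> dict:
    # Hypothetical stand-in for the LLM extraction call described in the
    # article (attendees, tasks, decisions pulled out of Markdown notes).
    return {
        "meeting_id": "2026-01-28-roadmap",
        "attendees": ["Alice", "Bob"],
        "tasks": [{"owner": "Alice", "text": "Draft Q2 OKRs"}],
    }

# MERGE (rather than CREATE) makes re-running the pipeline on a changed
# document update the graph in place instead of duplicating nodes, which is
# the behavior incremental processing needs. Assumes non-empty lists.
UPSERT = """
MERGE (m:Meeting {id: $meeting_id})
WITH m
UNWIND $attendees AS name
  MERGE (p:Person {name: name})
  MERGE (p)-[:ATTENDED]->(m)
WITH DISTINCT m
UNWIND $tasks AS task
  MERGE (t:Task {text: task.text})
  MERGE (t)-[:RAISED_IN]->(m)
  MERGE (o:Person {name: task.owner})
  MERGE (o)-[:OWNS]->(t)
"""

if __name__ == "__main__":
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    facts = extract_meeting_facts("# Roadmap sync ...")
    with driver.session() as session:
        session.run(UPSERT, **facts)
    driver.close()
```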
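For the UCP piece (must-read 1): the protocol's real endpoints and schemas are not given in the summary, so everything below, from the well-known manifest path to the field names and merchant URL, is a hypothetical stand-in. The sketch only illustrates the shape of the idea: an agent discovers a merchant's declared capabilities at runtime instead of relying on a hand-built, per-retailer integration.

```python
# Illustrative sketch only; all paths and fields are invented, not UCP's spec.
import requests

MERCHANT = "https://shop.example.com"  # hypothetical merchant base URL

def discover_capabilities(base_url: str) -> dict:
    # Hypothetical manifest, e.g. {"capabilities": {"checkout": {"endpoint": ...}}}
    return requests.get(f"{base_url}/.well-known/commerce.json", timeout=10).json()

def start_checkout(base_url: str, manifest: dict, cart: list[dict]) -> dict:
    caps = manifest.get("capabilities", {})
    if "checkout" not in caps:
        raise RuntimeError("Merchant does not advertise a checkout capability")
    # The agent follows whatever endpoint the merchant declared, rather than
    # a hard-coded, per-retailer URL: that is the interoperability win.
    endpoint = caps["checkout"]["endpoint"]
    return requests.post(f"{base_url}{endpoint}", json={"items": cart}, timeout=10).json()

manifest = discover_capabilities(MERCHANT)
order = start_checkout(MERCHANT, manifest, [{"sku": "A123", "qty": 1}])
print(order)
```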
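For the key-value memory piece (must-read 2): the sketch below, with invented toy data, shows the retrieval story in a few lines of numpy. Keys are compared against a cue (search) and the best-matching value is returned (fidelity); this read-out is exactly the softmax attention used in Transformers, which is the connection the article draws.

```python
# Toy numpy model of key-value memory: keys are addresses tuned for search,
# values are contents tuned for fidelity. All data here is invented.
import numpy as np

rng = np.random.default_rng(0)
n_memories, d_key, d_value = 5, 8, 16

keys = rng.standard_normal((n_memories, d_key))      # where memories live
values = rng.standard_normal((n_memories, d_value))  # what memories contain

def read_memory(cue: np.ndarray, beta: float = 4.0) -> np.ndarray:
    # Compare the cue against every key; a "retrieval failure" is a cue
    # that matches no key well, even though the value is stored intact.
    scores = beta * (keys @ cue)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over stored memories
    return weights @ values                  # blend dominated by the best match

# A noisy partial cue still lands on the right memory.
cue = keys[2] + 0.1 * rng.standard_normal(d_key)
recalled = read_memory(cue)
print(np.linalg.norm(recalled - values[2]))  # small: memory 2 was retrieved
```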
3. Building an Image-Based Recommendation System and Search Engine with Deep Learning and Elasticsearch
By Carmel Wenga

This guide demonstrates how to build an image-based recommendation system using the ResNet50 model and Elasticsearch. It details the process of extracting image embeddings with Keras/TensorFlow and storing them in a Dockerized Elasticsearch environment. The system then performs k-Nearest Neighbors (kNN) searches to identify and recommend visually similar products. The guide also shows how to adapt the same architecture into an image-based search engine, providing a complete implementation framework. A condensed sketch of this pipeline appears at the end of this issue.

4. DeepSeek's mHC Breakthrough: How Fixing Transformers Could End the AI Scaling Era
By Shreyansh Jain

Instead of relying on brute-force scaling, DeepSeek has focused on architectural improvements for AI models. The research addresses instability in expressive designs such as Hyper-Connections (HC) by introducing Manifold-Constrained Hyper-Connections (mHC). This method applies structural rules to guide information flow, preserving the flexibility of multi-stream designs while preventing instability. In large-scale experiments, mHC showed more predictable training and outperformed both standard Transformers and HC on performance benchmarks, especially in reasoning. The work suggests architectural innovation may be a more efficient path for advancing AI than simply increasing model size. An illustrative toy of the stability argument also appears at the end of this issue.

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.
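As promised, a condensed sketch of the pipeline from must-read 3: ResNet50 with its classification head removed and global average pooling produces a 2048-dimensional embedding per image, which is stored in an Elasticsearch dense_vector field and queried with kNN search. It assumes TensorFlow 2.x and the elasticsearch-py 8.x client; the index name, field names, file paths, and local endpoint are placeholders, not the article's exact configuration.

```python
# Hedged sketch of the pipeline's shape: embed images with ResNet50, index
# the vectors in Elasticsearch, retrieve neighbors with kNN search.
import numpy as np
from elasticsearch import Elasticsearch
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.utils import img_to_array, load_img

model = ResNet50(weights="imagenet", include_top=False, pooling="avg")  # 2048-d output
es = Elasticsearch("http://localhost:9200")

def embed(path: str) -> list[float]:
    img = img_to_array(load_img(path, target_size=(224, 224)))
    return model.predict(preprocess_input(img[np.newaxis]), verbose=0)[0].tolist()

# One-time index setup: cosine similarity over a 2048-d dense vector.
es.indices.create(index="products", mappings={"properties": {
    "embedding": {"type": "dense_vector", "dims": 2048,
                  "index": True, "similarity": "cosine"},
    "name": {"type": "keyword"},
}})

es.index(index="products", document={"name": "red sneaker", "embedding": embed("sneaker.jpg")})
es.indices.refresh(index="products")

# The same embedding space serves recommendation ("more like this product")
# and search ("find products matching this query image").
hits = es.search(index="products", knn={
    "field": "embedding", "query_vector": embed("query.jpg"),
    "k": 5, "num_candidates": 50,
})["hits"]["hits"]
print([h["_source"]["name"] for h in hits])
```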
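And for must-read 4, a toy illustration of the stability argument rather than DeepSeek's actual mHC formulation: mixing n residual streams with unconstrained matrices lets signal norms drift exponentially with depth, while projecting each mixing matrix toward a doubly stochastic one (Sinkhorn normalization, used here purely as a stand-in "manifold constraint") keeps every mixing step non-expansive.

```python
# Why constraining stream-mixing helps; not DeepSeek's actual mHC method.
import numpy as np

rng = np.random.default_rng(0)
n_streams, depth = 4, 64

def sinkhorn(m: np.ndarray, iters: int = 20) -> np.ndarray:
    # Alternately normalize rows and columns of a positive matrix so it
    # approaches a doubly stochastic one (rows and columns each sum to 1).
    m = np.exp(m)
    for _ in range(iters):
        m /= m.sum(axis=1, keepdims=True)
        m /= m.sum(axis=0, keepdims=True)
    return m

def run(depth: int, constrain: bool) -> float:
    # Push one vector of stream activations through `depth` random mixes.
    x = rng.standard_normal(n_streams)
    for _ in range(depth):
        mix = rng.standard_normal((n_streams, n_streams)) * 0.8
        if constrain:
            mix = sinkhorn(mix)
        x = mix @ x
    return float(np.linalg.norm(x))

print("unconstrained:", run(depth, constrain=False))  # norm drifts by orders of magnitude
print("constrained:  ", run(depth, constrain=True))   # norm stays bounded
```

A doubly stochastic matrix has operator norm at most one, which is why the constrained run stays bounded. Whatever exact manifold mHC actually uses, the point the toy makes is the article's: taming the mixing maps tames training, without giving up multiple residual streams.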