Tag: supabase

  • Is Gemini File Search Actually a Game-Changer?



    Date: 11/14/2025

    Watch the Video

    This week I watched a deep dive on Gemini File Search, and despite all the hype (“RAG killer!”), the reality is more grounded. It is useful, but not magic, and definitely not replacing real RAG systems anytime soon.

    At its core, Gemini File Search is Google’s fully managed RAG pipeline — you upload files, it chunks them, embeds them, stores them, and then uses those vectors to ground responses. No Pinecone, no pgvector, no Supabase storage. Just upload and query.

    Why the hype?

    The pricing. Storage is free, embeddings are cheap, and inference depends on whatever Gemini model you choose. Compared to OpenAI’s storage fees, Google positioned this aggressively.

    But once you look under the hood, several important realities show up:


    1. You Still Need a Data Pipeline

    The “upload a PDF in the browser and start chatting” demo is great… for demos.

    Real systems bring thousands of documents, handle updates, prevent duplicates, and maintain a clean knowledge base. Gemini does zero dedupe: upload a file three times and you’ll get three identical sets of chunks polluting your search results.

    So you still need a pipeline for:

    • file hashing

    • uniqueness checks

    • update detection

    • record management

    • scheduled ingestion

    Gemini simplifies the vector work, but not the actual operational work.
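
    To make that concrete, here’s a rough TypeScript sketch of the dedupe half of such a pipeline: hash each file before upload and skip anything you’ve already ingested. The `upload` callback and the in-memory `seen` map are placeholders rather than real Gemini SDK calls; in production you’d persist the hashes in a database.

    ```typescript
    import { createHash } from "node:crypto";
    import { readFile } from "node:fs/promises";

    // Placeholder record store: content hash -> path of the file already uploaded.
    const seen = new Map<string, string>();

    // `upload` stands in for whatever SDK call actually pushes the file to the store.
    async function ingest(path: string, upload: (p: string) => Promise<void>) {
      const bytes = await readFile(path);
      const hash = createHash("sha256").update(bytes).digest("hex");

      if (seen.has(hash)) {
        console.log(`skipping ${path}: identical to ${seen.get(hash)}`);
        return; // uniqueness check: don't pollute the index with duplicate chunks
      }

      await upload(path);   // the only step the managed service handles for you
      seen.set(hash, path); // record management: persist this in a real database
    }
    ```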


    2. Mid-Range RAG, Black-Box Internals

    The system is better than naïve RAG, but it’s missing higher-end retrieval techniques like:

    • hybrid search

    • contextual embeddings

    • re-ranking

    • multimodal chunk-level reasoning

    • structured retrieval for tables/spreadsheets

    You also have no visibility into what it’s doing internally. When responses degrade, you’re stuck: there’s no tuning, no custom chunking, no re-ranking.

    Good for simple use cases. Wrong tool once you hit complexity.
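
    As a tiny illustration of what “hybrid search” plus “re-ranking” buys you, and why it hurts that you can’t bolt it on here, this is a generic reciprocal rank fusion sketch (not tied to Gemini at all): merge a vector ranking and a keyword ranking into one list.

    ```typescript
    // Reciprocal Rank Fusion: combine two rankings of document IDs into one.
    // This is exactly the kind of step you can't add when the retrieval
    // pipeline is a managed black box.
    function rrf(vectorHits: string[], keywordHits: string[], k = 60): string[] {
      const score = new Map<string, number>();
      for (const hits of [vectorHits, keywordHits]) {
        hits.forEach((id, rank) => {
          score.set(id, (score.get(id) ?? 0) + 1 / (k + rank + 1));
        });
      }
      return [...score.entries()]
        .sort((a, b) => b[1] - a[1])
        .map(([id]) => id);
    }

    // Example: doc "b" ranks well in both lists, so it comes out on top.
    console.log(rrf(["a", "b", "c"], ["b", "d", "a"]));
    ```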


    3. Basic OCR, Basic Chunking, No Markdown

    The good:

    • OCR works and is fast

    • It handles non-machine-readable PDFs

    The downside:

    • No markdown structure

    • Headings lost

    • Chunk boundaries often split sentences

    • Coarse chunking hurts accuracy

    For anyone who relies on structured chunking (and most serious RAG setups do), this is a limitation.
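
    For comparison, here’s what structure-aware chunking can look like in a hand-rolled pipeline: split on markdown headings so every chunk keeps its section title. This is a generic sketch, not something Gemini exposes.

    ```typescript
    // Heading-aware chunking sketch: split markdown on headings so every chunk
    // carries its section title instead of an arbitrary character window.
    function chunkByHeadings(markdown: string): { heading: string; text: string }[] {
      const chunks: { heading: string; text: string }[] = [];
      let heading = "(no heading)";
      let buffer: string[] = [];

      const flush = () => {
        const text = buffer.join("\n").trim();
        if (text) chunks.push({ heading, text });
        buffer = [];
      };

      for (const line of markdown.split("\n")) {
        const match = /^(#{1,6})\s+(.*)/.exec(line);
        if (match) {
          flush();
          heading = match[2]; // new section starts here
        } else {
          buffer.push(line);
        }
      }
      flush();
      return chunks;
    }
    ```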


    4. Metadata Is Harder Than It Should Be

    Gemini doesn’t let you fetch all chunks of a processed document. That makes real metadata extraction hard, since you can’t reconstruct the content after upload.

    To add rich metadata, you need a second text-extraction pipeline… which defeats much of the “fully managed” promise.

    A simple “fetch all chunks for doc X” endpoint would solve this problem overnight.
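
    Until then, the workaround looks roughly like this: run your own text extraction next to the upload and keep the metadata in your own store, keyed by file name. `extractText` is a placeholder for whatever parser you use; the title and tag rules are purely illustrative.

    ```typescript
    // Workaround sketch: extract text yourself and keep metadata in your own
    // store, since the processed chunks can't be fetched back from the service.
    type DocMeta = { title: string; tags: string[] };

    const metadataStore = new Map<string, DocMeta>(); // keyed by uploaded file name

    async function registerDocument(
      fileName: string,
      extractText: (f: string) => Promise<string>, // placeholder parser (pdf, docx, ...)
    ) {
      const text = await extractText(fileName);
      metadataStore.set(fileName, {
        title: text.split("\n")[0]?.slice(0, 120) ?? fileName, // crude title guess
        tags: /invoice/i.test(text) ? ["invoice"] : [],        // example tagging rule
      });
    }
    ```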


    5. Vendor Lock-In & Data Residency

    All data sits with Google. If you care about:

    • privacy

    • PII

    • GDPR

    • on-prem requirements

    …you’re living inside their walls.

    And you can only use Gemini models with Gemini File Search. No mixing ecosystems. No swapping out the model later.


    Verdict

    Gemini File Search is RAG as a service, not a RAG killer. It’s not new — OpenAI and others already offer similar pipelines — but the pricing and simplicity are compelling. For light to mid-level use cases, it’s a great on-ramp.

    But the moment you need:

    • full control

    • advanced retrieval techniques

    • transparency

    • structured pipelines

    • guaranteed accuracy

    …you’ll eventually have to replatform.

    Still — it’s a strong option for fast prototyping or small-to-medium business workflows where simplicity wins.

  • We made Supabase Auth way faster!



    Date: 07/26/2025

    Watch the Video

    I’ve always found that user authentication can be a hidden performance bottleneck in web apps. This Supabase update directly tackles that issue by introducing JWT Signing Keys. The core idea is simple but powerful: instead of your app making a slow network request back to Supabase to verify a user, it can now validate their session token instantly on its own. This is a massive performance win for building snappier, more responsive applications and APIs.

    For anyone building tools or automated workflows on a Supabase backend, this means less waiting and more reliable, faster execution. It’s a perfect example of a sophisticated architectural improvement that removes friction and helps small teams build products that feel incredibly fast. The video does a great job of walking through not just the “why,” but the practical steps to implement it.
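
    For a sense of what that local validation looks like, here’s a minimal sketch using the `jose` library and the project’s public JWKS. The exact JWKS URL shape is an assumption on my part; check your project’s auth settings rather than trusting this snippet.

    ```typescript
    // Local JWT verification sketch with `jose`. The JWKS URL below is an
    // assumption -- look up the real endpoint in your Supabase project settings.
    import { createRemoteJWKSet, jwtVerify } from "jose";

    const JWKS = createRemoteJWKSet(
      new URL("https://<project-ref>.supabase.co/auth/v1/.well-known/jwks.json"),
    );

    export async function verifySession(accessToken: string) {
      // No round trip to the Auth server: the public key is cached locally and
      // the token signature is checked in-process.
      const { payload } = await jwtVerify(accessToken, JWKS);
      return payload; // sub, role, exp, etc.
    }
    ```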

  • We made Supabase Auth way faster!



    Date: 07/26/2025

    Watch the Video

    I’ve always been wary of the performance hit that comes with server-side authentication. Each check can mean a slow network round-trip, which really adds up. But with Supabase’s new JWT Signing Keys, everything changes. Now, you can validate user sessions locally within your own application. This is a huge performance win! It eliminates that auth-related network latency, making for a much snappier user experience. It’s a great example of a modern, edge-friendly solution to a common bottleneck that typically slows down our apps.

    To see this in action, check out the video by Jon Meyers. He walks through the entire process, from enabling the feature to refactoring a Next.js app to take full advantage of it.

  • We made Supabase Auth way faster!



    Date: 07/25/2025

    Watch the Video

    Okay, this video on Supabase JWT signing keys is definitely worth checking out, especially if you’re like me and trying to level up your development game with AI and automation. In a nutshell, it shows how to switch your Supabase project to use asymmetric JWTs with signing keys, letting you validate user JWTs locally instead of hitting the Supabase Auth server every time. The demo uses a Next.js app as an example, refactoring the code to use getClaims instead of getUser and walking through enabling the feature and migrating API keys. It also touches on key rotation and revocation.

    Why is this so relevant for us? Well, imagine you’re building an AI-powered app that relies heavily on user authentication. Validating JWTs against the Auth server on every request becomes a bottleneck, impacting performance. This video provides a clear path to eliminating that bottleneck. We can use this approach not only for web apps but also adapt it for serverless functions or even integrate it into our AI agents to verify user identity and permissions locally. It helps improve performance and reduces dependence on external services, which in turn speeds up our entire development and deployment cycle.

    What I find particularly exciting is the potential for automation. The video mentions a single command to bootstrap a Next.js app with JWT signing keys. Think about integrating this into your CI/CD pipeline or using an LLM to generate the necessary code snippets for other frameworks. Faster authentication means faster feedback loops for users, and less dependency on external validation. It’s a small change that can yield huge performance and efficiency gains, and that makes it absolutely worth experimenting with.
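
    Here’s a hedged sketch of the getUser-to-getClaims refactor in plain supabase-js (the video uses the Next.js helpers, and exact return shapes and env var names can differ, so treat this as an outline rather than a drop-in snippet):

    ```typescript
    import { createClient } from "@supabase/supabase-js";

    // Env var names are placeholders for your own configuration.
    const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

    // Before: a network round trip to the Auth server on every request.
    async function currentUserIdSlow() {
      const { data } = await supabase.auth.getUser();
      return data.user?.id;
    }

    // After: the JWT is verified locally against the project's public signing key.
    async function currentUserIdFast() {
      const { data } = await supabase.auth.getClaims();
      return data?.claims.sub;
    }
    ```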

  • The new way to do Auth Keys in Supabase



    Date: 07/14/2025

    Watch the Video

    Okay, this Supabase video about JWT Signing Keys and API Keys is seriously worth checking out. It’s all about improving security and performance, which is music to my ears as I dive deeper into AI-driven workflows. Essentially, they’re replacing the old Anon and Service Role keys with more granular API keys and introducing asymmetric JWTs. This means your app can verify users locally without hitting the Supabase Auth Server every time, which is huge for speed.

    Why is this valuable for someone like me transitioning into AI coding and no-code? Well, think about it: many AI-powered apps need secure and fast authentication. These changes streamline that process. I can see using these API keys to lock down specific microservices or AI agents, ensuring they only access what they’re supposed to. Plus, the JWT signing keys mean I can potentially offload authentication logic to the edge, further improving response times for AI-driven features.

    Honestly, this video is inspiring because it highlights how traditional backend bottlenecks can be solved with smart architectural changes. Experimenting with these new Supabase features feels like a natural extension of my AI/no-code journey, allowing me to build more secure, scalable, and performant AI-powered applications. I am thinking of ways I can use this with llama-index and langchain.net. Definitely worth a weekend project!

  • My new FAVORITE way to use Supabase



    Date: 06/04/2025

    Watch the Video

    Okay, this Supabase MCP Server video is seriously cool, and here’s why I think it’s worth your time. It shows how to give your AI agent deep context about your Supabase project, essentially letting it “understand” your backend in the same way it groks file structures and code. Jon Meyers walks through setting up the Supabase MCP server within Cursor IDE and then uses Claude to whip up an Edge Function that intelligently scrapes recipe websites. Forget the ad-ridden, SEO-spam versions – this pulls out just the core recipe data and stores it in a Postgres database, then displays it in a Next.js app.

    The real value for us, as developers moving towards AI-assisted workflows, is how it streamlines development and automation. Imagine the possibilities! Instead of manually writing complex scrapers and data cleaning scripts, you can leverage AI to handle that heavy lifting. I’ve spent countless hours wrestling with web scraping in the past (and honestly, who hasn’t?), so seeing this level of automation makes me genuinely excited. This isn’t just about scraping recipes; it’s about connecting AI to your database schema, table relationships, and even your custom functions, allowing it to assist in tasks you hadn’t even imagined.

    I’m already brainstorming ways to apply this to our internal tools and client projects. Think automated data migrations, intelligent report generation, or even AI-powered API development. This video gives a practical, hands-on example of how to bridge the gap between LLMs and real-world development tasks. The combination of Supabase’s backend capabilities and AI coding tools like Claude could seriously boost productivity and unlock new levels of automation; it’s definitely worth experimenting with.

  • Self-Host Supabase Edge Functions



    Date: 06/02/2025

    Watch the Video

    Okay, this video on Supabase Edge Functions on Fly.io is gold for any of us transitioning to AI-driven workflows. Jon Meyers walks through deploying a Supabase Edge Function, essentially a serverless function, on Fly.io using Deno and Oak middleware. This means we can ditch some of the heavier backend lifting and focus on orchestrating logic with tools like LLMs.

    Why’s it valuable? Because it showcases how to self-host these functions, giving us control and flexibility. Instead of being tied to a specific cloud provider’s serverless platform, we can deploy these lightweight functions anywhere, including environments where we’re integrating AI agents or no-code solutions. Imagine using an LLM to generate the core logic within the Edge Function, then deploying it to a cost-effective and scalable platform like Fly.io, orchestrated entirely by AI. We can have AI write the function, write the tests, and orchestrate the deployment!

    The real-world application is huge. Think automated content generation, dynamic API endpoints, or even real-time data transformation triggered by AI models. By experimenting with this, we’re not just learning about Supabase or Fly.io; we’re building a foundation for a whole new level of automation and intelligent applications. It’s definitely worth carving out an hour to play with!
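
    If you haven’t touched Oak before, the function being deployed is conceptually this small. A minimal sketch (the route, port, and response are made up for illustration, and the Fly.io config is omitted):

    ```typescript
    // Minimal Deno + Oak server of the kind deployed in the video.
    import { Application, Router } from "https://deno.land/x/oak/mod.ts";

    const router = new Router();
    router.get("/hello", (ctx) => {
      ctx.response.body = { message: "Hello from a self-hosted edge function" };
    });

    const app = new Application();
    app.use(router.routes());
    app.use(router.allowedMethods());

    await app.listen({ port: 8000 }); // Fly.io routes external traffic to this port
    ```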

  • Complete Guide ⚡️ Supabase Self-Hosted ➕ Custom S3 ➕ Authelia



    Date: 06/02/2025

    Watch the Video

    Okay, this video on self-hosting Supabase with S3 storage, custom domains, and Authelia is exactly the kind of thing I’m diving into! It’s a walkthrough of setting up a complete backend infrastructure, and what’s killer is the focus on self-hosting. We’re talking full control, reducing reliance on external services, and potentially big cost savings down the road. It’s not just about slapping together a quick prototype; it’s about building a robust, production-ready environment.

    What makes this video inspiring is that it bridges the gap between traditional backend setups and the newer, “serverless” world that Supabase offers. The inclusion of Authelia for authentication shows a real-world security mindset. We, as devs, can leverage the techniques shown here to move away from the complexity of frameworks such as Laravel, use Supabase as a BaaS, and build an entire scalable app with Vue, React, or Svelte (my favourite). The video even acknowledges some initial hiccups (which the author immediately fixed), which adds a layer of authenticity.

    I’m already thinking about how I can use this setup for a client project where data sovereignty and control are paramount. Instead of relying on a managed Supabase instance, I can deploy this on a Hetzner or DigitalOcean server, giving the client complete ownership of their data. This video is a must-watch for any developer looking to level up their backend game and explore the power of self-hosted solutions, and the mentioned hiccups only add credibility. I’m going to experiment with this over the weekend.

  • Supabase Edge Functions Just Got Way Easier



    Date: 05/12/2025

    Watch the Video

    Okay, so this Supabase video is a game-changer for anyone diving into serverless functions. It basically shows you how to create, test, and even edit your Supabase Edge Functions with AI, all directly from the dashboard. No more complex CLI setups or wrestling with configurations – it’s all visual and streamlined. As someone who’s been trying to blend traditional PHP/Laravel with AI-assisted development, this hits the sweet spot.

    Why’s it valuable? Because it drastically lowers the barrier to entry for using Edge Functions. Think about it: you could use this for things like image optimization on upload, real-time data transformations, or even custom authentication logic – all triggered at the edge, closer to the user. The AI editing feature is what really caught my eye. Imagine describing what you want the function to do, having the AI generate the code, and then fine-tuning from there. It can be like pair programming with an AI assistant.

    For me, this is worth experimenting with because it aligns perfectly with automating repetitive tasks and boosting productivity. We can focus more on the business logic and less on the infrastructure plumbing. Plus, the fact that it’s all within the Supabase ecosystem makes it even more appealing. It makes me wonder how many custom PHP scripts I have running that could be streamlined using serverless functions edited by an AI; it would be a significant improvement.
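
    To ground that a bit, here’s roughly the kind of function you’d be creating and iterating on in the dashboard. Supabase’s edge runtime is Deno-based and recent examples use Deno.serve; the handler body here is just an illustration of something an AI prompt might produce, not code from the video.

    ```typescript
    Deno.serve(async (req: Request) => {
      // Imagine prompting the AI editor with "greet the caller and echo the
      // request method" and then fine-tuning the generated body from there.
      const { name } = await req.json().catch(() => ({ name: "world" }));

      return new Response(
        JSON.stringify({ message: `Hello ${name}`, method: req.method }),
        { headers: { "Content-Type": "application/json" } },
      );
    });
    ```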

  • We improved Supabase AI … A lot!



    Date: 05/02/2025

    Watch the Video

    Okay, so this video with the Supabase AI assistant, where “John” builds a Slack clone using only AI prompts, is seriously inspiring. It’s a clear demonstration of how far AI-assisted development has come. We’re talking about things like schema generation, SQL debugging, bulk updates, even charting – all driven by natural language. For someone like me who’s been wrestling with SQL and database design for ages, the idea of offloading that work to an AI while I focus on the higher-level logic is a game-changer.

    What really stands out is seeing these AI tools applied to a practical scenario. Instead of just theoretical possibilities, you’re watching someone build something real – a Slack clone. Think about the implications: instead of spending hours crafting complex SQL queries for data migrations, you could describe the desired transformation in plain English and let the AI handle the syntax. Or imagine generating different chart types to visualize database performance with a single prompt! This isn’t just about saving time; it’s about unlocking a level of agility and experimentation that was previously out of reach.

    Honestly, seeing this makes me want to dive in and experiment with Supabase’s AI assistant ASAP. I can envision using it to rapidly prototype new features, explore different data models, and even automate tedious database administration tasks. Plus, debugging SQL is one of those tasks that every developer loves to hate. I really recommend giving it a try, because you’ll start to notice other tasks you could offload. It feels like we’re finally getting to a point where AI isn’t just a buzzword, but a genuine force multiplier for developers.