Category: Try

  • Manage secrets and query third-party APIs from Postgres



    Date: 05/02/2025

    Watch the Video

    This Supabase video about Foreign Data Wrappers (FDW) is a game-changer for any developer looking to streamline their data workflows. In essence, it shows you how to directly query live Stripe data from your Supabase Postgres database using FDWs and securely manage your Stripe API keys using Supabase Vault. Why is this so cool? Imagine being able to run SQL aggregates directly on your Stripe data without having to build and maintain separate ETL pipelines!
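    Just to make the idea concrete, here's a toy sketch with Python's built-in sqlite3 standing in for Postgres, and made-up table and column names, showing the kind of aggregate you'd get to run once the Stripe foreign tables are wired up:

```python
import sqlite3

# Toy stand-in for a Stripe "charges" foreign table. With the FDW in
# place, the same aggregate would run against live Stripe data instead
# of a local table; the table and column names here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("create table stripe_charges (id text, amount integer, currency text)")
conn.executemany(
    "insert into stripe_charges values (?, ?, ?)",
    [("ch_1", 1500, "usd"), ("ch_2", 2500, "usd"), ("ch_3", 900, "eur")],
)

# The payoff: a plain SQL aggregate, no ETL pipeline required.
row = conn.execute(
    "select currency, sum(amount) from stripe_charges group by currency order by currency"
).fetchall()
print(row)  # [('eur', 900), ('usd', 4000)]
```

    In the real setup, Supabase's Wrappers extension exposes Stripe objects as foreign tables and Vault holds the API key, so an aggregate just like this one runs against live Stripe data.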

    For someone like me who’s been diving deep into AI-enhanced workflows, this video is pure gold. It bridges the gap between complex data silos and gives you the power to access and manipulate that data right within your existing database environment. Think about the possibilities for building automated reporting dashboards, triggering custom logic based on real-time Stripe events, or even training machine learning models with up-to-date financial data. Plus, the integration with Supabase Vault ensures that your API keys are securely managed, which is paramount in any data-driven application.

    This approach could revolutionize how we handle real-world development and automation tasks. Instead of writing custom code to fetch and process data from external APIs, you can simply use SQL. And, let’s be honest, who doesn’t love writing SQL? I’m definitely going to experiment with this. The time saved by skipping separate data-integration pipelines, plus the agility of having Stripe data directly queryable in Postgres, are huge wins!

  • Suna: FULLY FREE Manus Alternative with UI! Generalist AI Agent! (Opensource)



    Date: 05/01/2025

    Watch the Video

    Okay, so this video introduces Suna AI, which is pitched as an open-source, fully local AI agent. It’s positioned as a direct competitor to commercial offerings like Manus and GenSpark AI, but with the significant advantages of being free and having a clean, ready-to-use UI. The video walks through setting it up with Docker, Supabase (for the backend), and integrating LLM APIs like Anthropic Claude via LiteLLM. It even covers how to use Daytona for easier environment provisioning, which is super helpful.
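    For flavor, here's roughly what the LiteLLM side of that wiring looks like: a friendly model name mapped to a concrete provider model. I'm sketching it as a Python dict; the field names follow LiteLLM's model_list convention, but the exact values are placeholders, not Suna's actual config:

```python
# Sketch of a LiteLLM routing config: the agent talks to "claude", and
# LiteLLM resolves that to a concrete Anthropic model. The model ID and
# env-var reference below are placeholder examples.
litellm_config = {
    "model_list": [
        {
            "model_name": "claude",
            "litellm_params": {
                "model": "anthropic/claude-3-5-sonnet-20240620",
                # LiteLLM-style reference to an environment variable
                "api_key": "os.environ/ANTHROPIC_API_KEY",
            },
        }
    ]
}

entry = litellm_config["model_list"][0]
print(entry["model_name"])  # claude
```

    The nice part of this indirection is that swapping providers later means editing the config, not the agent code.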

    Why is this interesting for us as developers moving into AI-enhanced workflows? Well, the promise of a powerful, self-hosted AI agent is huge. I’ve been increasingly focused on bringing AI capabilities closer to the metal for better control, privacy, and cost efficiency. Suna AI seems to tick all those boxes. Imagine having an AI assistant that you can tweak, customize, and integrate deeply into your existing systems, with full control over where your data lives and which model APIs you call. Plus, the video highlights real-world use cases like data analysis and research, which are exactly the kind of tasks I’m looking to automate and improve.

    For me, the biggest draw is the control and flexibility. I’m tired of being locked into proprietary platforms with limited customization options. The idea of having a fully local, open-source AI agent that I can mold to my specific needs is incredibly appealing. Experimenting with Suna could lead to creating custom tools for code generation, automated testing, or even client communication. It’s definitely worth checking out and seeing how it can fit into my AI-enhanced development workflow.

  • NEW! OpenAI’s GPT Image API Just Replaced Your Design Team (n8n)



    Date: 04/30/2025

    Watch the Video

    Okay, this video is seriously inspiring for anyone diving into AI-powered development! It’s all about automating the creation of social media infographics using OpenAI’s new image model, news scraping, and n8n. The workflow they build takes real-time news, generates engaging posts and visuals, and even includes a human-in-the-loop approval process via Slack before publishing to Twitter and LinkedIn. I think this is really cool.
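    To see the shape of that pipeline outside of n8n, here's a stubbed Python sketch. All the function bodies are stand-ins for the video's nodes (the scraper, the OpenAI image step, the Slack approval gate, the publishers), so treat the names as mine, not the workflow's:

```python
def scrape_news():
    # stand-in for the news-scraping step
    return [{"title": "Example headline", "url": "https://example.com"}]

def generate_post(article):
    # stand-in for the LLM text + image-model step
    return {"text": f"Hot take on: {article['title']}", "image": "infographic.png"}

def request_approval(post):
    # stand-in for the Slack human-in-the-loop gate; auto-approve here
    return True

def publish(post):
    # stand-in for the Twitter/LinkedIn publishing step
    return f"published: {post['text']}"

results = []
for article in scrape_news():
    post = generate_post(article)
    if request_approval(post):  # nothing ships without sign-off
        results.append(publish(post))
print(results)
```

    The approval gate is the part worth copying: fully automated generation, but a human still owns the publish button.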

    Why is this valuable? Well, we’re talking about automating content creation end-to-end! As someone who’s been spending time figuring out how to use LLMs to streamline my workflows, this hits all the right notes. Imagine automatically turning blog posts into visual assets, crafting unique images for each article, and keeping your social media feeds constantly updated with zero manual effort – that’s the kind of time savings that translates directly into business value.

    The cool part is the integration with tools like Slack for approval, plus the ability to embed these AI-generated infographics into blog posts. This moves beyond basic automation and shows how to orchestrate complex, AI-driven content pipelines. I think it’s worth experimenting with because it showcases a tangible, real-world application of AI. It also presents a solid framework for building similar automations tailored to different content types or platforms. I can envision using this approach to generate marketing materials or even internal documentation for my projects, further decreasing time spent on manual tasks.

  • Two NEW n8n RAG Strategies (Anthropic’s Contextual Retrieval & Late Chunking)



    Date: 04/29/2025

    Watch the Video

    Okay, this video is gold for anyone, like me, diving deep into AI-powered workflows! Basically, it tackles a huge pain point in RAG (Retrieval-Augmented Generation) systems: the “Lost Context Problem.” We’ve all been there, right? You ask your LLM a question, it pulls up relevant-ish chunks, but the answer is still inaccurate or just plain hallucinated. This video explains why that happens and, more importantly, offers two killer strategies to fix it: Late Chunking and Contextual Retrieval.

    Why is this video so relevant for us right now? Because it moves beyond basic RAG implementations. It directly addresses the limitations of naive chunking methods. The video introduces using long-context embedding models (Jina AI) and LLMs (Gemini 1.5 Flash) to maintain and enrich context before and during retrieval. Imagine being able to feed your LLM more comprehensive and relevant information, drastically reducing inaccuracies and hallucinations. The presenter implements both techniques step-by-step in n8n, which is fantastic because it gives you a practical, no-code (or low-code!) way to experiment.
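    Here's the Contextual Retrieval idea boiled down to a few lines of Python. The LLM call is stubbed with a plain string template (in the video it's Gemini 1.5 Flash doing the situating), and the chunker is deliberately naive:

```python
def situate_chunk(document_title, chunk):
    # Stand-in for the LLM call that writes a short blurb situating the
    # chunk within the whole document before it gets embedded.
    return f"From '{document_title}': {chunk}"

def naive_chunks(text, size=40):
    # Deliberately naive fixed-size chunking, to show what gets enriched.
    return [text[i:i + size] for i in range(0, len(text), size)]

doc_title = "Q3 incident report"
doc = "The outage began at 02:14 UTC. Root cause was a misconfigured load balancer."

contextualized = [situate_chunk(doc_title, c) for c in naive_chunks(doc)]
# The enriched chunk, not the bare one, is what goes to the embedder.
print(contextualized[0])
```

    The point is that a bare chunk like "Root cause was a misconfigured load balancer" is ambiguous on its own, but with the situating prefix attached, retrieval has the context it needs.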

    Think about the possibilities: better chatbot accuracy, more reliable document summarization, improved knowledge base retrieval… all by implementing these context-aware RAG techniques. I’m especially excited about the Contextual Retrieval approach, leveraging LLMs to add descriptive context before embedding. It’s a clever way to use AI to enhance AI. I’m planning to try it out in one of my client’s projects to make our support bot more robust. Definitely worth the time to experiment with these workflows.

  • Introducing the GitHub MCP Server: AI interaction protocol | GitHub Checkout



    Date: 04/28/2025

    Watch the Video

    Okay, so this GitHub Checkout video about the MCP (Model Context Protocol) Server is exactly the kind of thing that gets me excited about the future of coding. Basically, it’s about creating a standard way for AI assistants to deeply understand and interact with your GitHub projects – code, issues, even your development workflow. Think about it: instead of clunky integrations, you’d have AI tools that natively speak “GitHub,” leading to smarter code suggestions, automated issue triage, and maybe even AI-driven pull request reviews.
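    Under the hood, MCP rides on JSON-RPC 2.0, so a client asking the GitHub server to run a tool is just a small structured request. Here's a Python sketch; the tool name and arguments are illustrative guesses on my part, not the server's actual tool list:

```python
import json

def mcp_tool_call(request_id, tool, arguments):
    # Shape of an MCP "tools/call" request over JSON-RPC 2.0.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical tool name and arguments for listing a repo's issues.
req = mcp_tool_call(1, "list_issues", {"owner": "octocat", "repo": "hello-world"})
print(json.dumps(req, indent=2))
```

    That uniformity is the whole pitch: every tool the server exposes, from issue triage to PR review, is reachable through this one request shape instead of a bespoke integration.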

    For someone like me who’s actively shifting towards AI-enhanced development, this is huge. Right now, integrating AI tools can feel like hacking solutions together, often requiring a lot of custom scripting and API wrangling. A unified protocol like MCP promises to streamline that process, allowing us to focus on the actual problem-solving instead of the plumbing. Imagine automating tedious tasks like code documentation or security vulnerability checks directly within your GitHub workflow, or having an AI intelligently guide new team members through a complex project.

    Honestly, this feels like a foundational piece for the next generation of AI-powered development. I’m planning to dive into the MCP Server, experiment with building some custom integrations, and see how it can be applied to automate parts of our CI/CD pipeline. It’s open source, which is awesome, and the potential for truly intelligent AI-assisted coding is just too compelling to ignore.

  • Did Docker’s Model Runner Just DESTROY Ollama?



    Date: 04/28/2025

    Watch the Video

    Okay, this video is seriously worth a look if you’re like me and trying to weave AI deeper into your development workflow. It basically pits Docker against Ollama for running local LLMs, and the results are pretty interesting. They demo a Node app hitting a local LLM (SmolLM2, specifically) running inside a Docker container and show off Docker’s new AI features like the Gordon AI agent.
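    Since Model Runner speaks an OpenAI-compatible chat API, hitting a local model from any language is just an HTTP POST. Here's a Python sketch; the endpoint URL is an assumption on my part, so check Docker's docs for the address your setup actually exposes:

```python
import json
import urllib.request

# Assumed local address for Model Runner's OpenAI-compatible endpoint.
ENDPOINT = "http://localhost:12434/engines/v1/chat/completions"

payload = {
    "model": "ai/smollm2",  # model name as pulled via `docker model`
    "messages": [{"role": "user", "content": "Summarize this repo in one line."}],
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would actually send it; skipped here since
# it needs Model Runner listening locally.
print(payload["model"])
```

    The same payload works against any OpenAI-compatible backend, which is exactly why swapping between Ollama, Model Runner, or a hosted API is mostly a URL change.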

    What’s super relevant is the Gordon AI agent’s MCP (Model Context Protocol) support. Think about it: deploying and managing the servers that give your AI agents their tools (like microservices, but for AI) can be a real headache. This video shows how Docker Compose makes it relatively painless to spin up MCP servers, something that could simplify a lot of the AI-powered features we’re trying to bake into our applications.

    Honestly, I’m digging the idea of using Docker to manage my local AI models. Containerizing everything just makes sense for consistency and portability. It’s a compelling alternative to Ollama, especially if you’re already heavily invested in the Docker ecosystem. I’m definitely going to play around with the Docker Model Runner and Gordon to see if it streamlines my local LLM experiments and how well it plays with my existing Laravel projects. The ability to version control and easily share these AI-powered environments with the team is a HUGE win.

  • How Supabase Simplifies Your Database Management with Declarative Schema



    Date: 04/28/2025

    Watch the Video

    Okay, this Supabase video on declarative schema is seriously interesting, especially for how we’re trying to integrate AI into our workflows. It tackles a common pain point: managing database schemas. Instead of scattered migration files, you get a single source of truth, a declarative schema file. Supabase then automatically updates your migration files based on this schema. Think of it as Infrastructure as Code, but for your database – it makes versioning, understanding, and, crucially, feeding schemas into LLMs way easier.
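    The core mechanic (diff the declared schema against what exists, derive migrations) is easy to picture in miniature. This Python toy only diffs column lists for a single table; Supabase's CLI of course handles real Postgres objects, but the shape of the idea is the same:

```python
# What the database currently has vs. what the declarative schema file
# says it should have. Table and column names are made up for the demo.
current = {"profiles": {"id": "uuid", "name": "text"}}
desired = {"profiles": {"id": "uuid", "name": "text", "bio": "text"}}

def diff_table(table, have, want):
    # Emit one ALTER statement per column present in the declared schema
    # but missing from the live one.
    stmts = []
    for col, coltype in want.items():
        if col not in have:
            stmts.append(f"alter table {table} add column {col} {coltype};")
    return stmts

migration = diff_table("profiles", current["profiles"], desired["profiles"])
print(migration)  # ['alter table profiles add column bio text;']
```

    You edit the declaration, the diff becomes the migration, and the migration files stop being the source of truth.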

    Why is this valuable? Well, imagine using an LLM to generate complex queries or even suggest schema optimizations. Having a single, well-defined schema file makes that process infinitely smoother. Plus, the video shows how it handles views, functions, and Row Level Security (RLS) – all essential for real-world applications. We could potentially automate a lot of schema-related tasks, like generating documentation or even suggesting security policies based on the schema definition.

    For me, the “single source of truth” aspect is the biggest draw. We’re moving towards using AI to assist with database management, and having a clean, declarative schema is the foundation for that. I’m definitely going to experiment with this, especially on projects where we’re leveraging LLMs for data analysis or AI-powered features. It’s worth it just to streamline schema management, but the potential for AI integration is what makes it truly exciting.

  • Effortless RAG in n8n – Use ALL Your Files (PDFs, Excel, and More)



    Date: 04/28/2025

    Watch the Video

    Alright, so this video is all about leveling up your RAG (Retrieval-Augmented Generation) pipelines in n8n to handle more than just plain text. It tackles the common problem of dealing with different file types like PDFs and Excel sheets when building your knowledge base. The creator walks you through a workflow to extract text from these files, which n8n doesn’t natively support with a single node.
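    The heart of the workflow is routing each file to the right extractor by MIME type before anything reaches the vector store. Here's that dispatch sketched in Python, with the extractors stubbed out (in n8n they're separate branches of nodes):

```python
# Stub extractors keyed by MIME type. In the real workflow these would
# call a PDF parser, a spreadsheet reader, and so on.
EXTRACTORS = {
    "application/pdf": lambda f: f"pdf text from {f}",
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet":
        lambda f: f"sheet rows from {f}",
    "text/plain": lambda f: f"raw text from {f}",
}

def extract_text(filename, mime_type):
    # Route the file to its extractor, failing loudly on unknown types.
    extractor = EXTRACTORS.get(mime_type)
    if extractor is None:
        raise ValueError(f"unsupported type: {mime_type}")
    return extractor(filename)

print(extract_text("report.pdf", "application/pdf"))  # pdf text from report.pdf
```

    Failing loudly on unknown types matters: silently skipping a file is how knowledge bases end up with invisible gaps.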

    This is super valuable for anyone like me diving into AI-enhanced workflows. One of the biggest hurdles I’ve faced is getting data into the system. We often have project requirements where the knowledge base isn’t just text files; it’s documentation, spreadsheets, PDFs, even scanned images. This video shows a practical, no-code/low-code approach to ingesting those diverse file types and cleaning and transforming them for use with LLMs. The link to the workflow and the list of Google MIME types are clutch!

    Imagine automating document processing for a client, extracting key data from reports or contracts, and feeding it into your LLM-powered chatbot or analysis tool. No more manual copy-pasting! The video’s approach of breaking down the extraction process and handling different file types really resonated with me. I am downloading this workflow right now and planning on applying a similar approach to process and extract information from scanned images using OCR and then load it into a vector database. Worth experimenting with? Absolutely! It’s about bridging the gap between raw data and intelligent applications, making our AI agents more versatile and effective.

  • Scrape Any Website for FREE & NO CODE Using DeepSeek & Crawl4AI! (Opensource)



    Date: 04/25/2025

    Watch the Video

    Okay, this video is definitely worth checking out, especially if you’re like me and trying to leverage AI for everyday development tasks. Essentially, it’s a walkthrough of how to use DeepSeek’s AI web crawler and Crawl4AI to scrape data from websites without writing a bunch of custom code. Think about it – how many times have you needed to pull data from a site but dreaded writing all the scraping logic? (I know, too many for me to count!)

    What’s cool is that this solution is open-source and, according to the video, relatively straightforward to set up. It walks you through forking the DeepSeek AI Web Crawler, using Crawl4AI for faster, asynchronous scraping, and then extracting the data in formats like Markdown, JSON, or CSV. The real kicker is being able to deploy your own public web scraper. We are no longer bound by the limitations of pre-built tools. Want to grab venue details, product info, blog content? It sounds like it can handle a variety of scraping tasks, which is super useful. This opens up opportunities for automated data collection, competitive analysis, and even content aggregation without the headache of traditional scraping.
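    To show what "structured output" means here, a stdlib-only Python toy that turns markup into JSON. The real workflow leans on Crawl4AI plus DeepSeek to figure out the extraction rules for you, which is the whole point, instead of hand-writing them like this:

```python
import json
from html.parser import HTMLParser

class ProductParser(HTMLParser):
    # Hand-written extraction rule: collect text inside <h2 class="product">.
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "product") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.items.append({"name": data.strip()})

html = '<h2 class="product">Widget</h2><h2 class="product">Gadget</h2>'
parser = ProductParser()
parser.feed(html)
print(json.dumps(parser.items))  # [{"name": "Widget"}, {"name": "Gadget"}]
```

    Every line of that parser is exactly the brittle, per-site logic the AI-driven approach promises to make unnecessary.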

    For someone transitioning into AI-enhanced workflows, this is a fantastic example of how AI can abstract away the tedious parts of development. Imagine the time saved by not having to hand-code scrapers for every website! Plus, the ability to output structured data directly is a huge win. The video mentions using Groq’s DeepSeek API, which suggests the AI is doing some heavy lifting in understanding and extracting the relevant information. Honestly, the promise of pasting a link and getting clean, structured data “in seconds” is enticing enough to give this a shot. I’m thinking this could be a game-changer for automating data-driven tasks and freeing up time to focus on more strategic development work.

  • This is Hands Down the BEST MCP Server for AI Coding Assistants



    Date: 04/24/2025

    Watch the Video

    Okay, this video on Context7 looks seriously cool, and it’s exactly the kind of thing I’ve been digging into lately. Essentially, it addresses a major pain point when using AI coding assistants like Cursor or Windsurf: their tendency to “hallucinate” or give inaccurate suggestions, especially when dealing with specific frameworks and tools. The video introduces Context7, an MCP server that allows you to feed documentation directly to these AI assistants, giving them the context they need to generate better code.
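    Here's the Context7 idea in miniature: look up the documentation relevant to the task and splice it into the prompt so the assistant stops guessing. The doc snippets and keyword matching below are deliberately simplistic stand-ins for what the MCP server actually does:

```python
# Tiny pretend documentation store; in Context7 this is real,
# framework-specific documentation served over MCP.
DOCS = {
    "route model binding": "Laravel resolves route parameters to models automatically.",
    "eager loading": "Use with() to avoid N+1 queries when loading relations.",
}

def build_prompt(task):
    # Naive keyword match standing in for real retrieval.
    relevant = [text for topic, text in DOCS.items() if topic in task.lower()]
    context = "\n".join(relevant)
    return f"Documentation:\n{context}\n\nTask: {task}"

prompt = build_prompt("Add eager loading to the posts index controller")
print("N+1" in prompt)  # True
```

    Grounded in the right snippet, the assistant is nudged toward `with()` instead of hallucinating an API that doesn't exist.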

    Why is this valuable? Well, as someone knee-deep in migrating from traditional Laravel development to AI-assisted workflows, I’ve seen firsthand how frustrating those AI hallucinations can be. You end up spending more time debugging AI-generated code than writing it yourself! Context7 seems to offer a way to ground these AI assistants in reality by providing them with accurate, framework-specific documentation. This could be a game-changer for automating repetitive tasks, generating boilerplate code, and even building complex features faster. Imagine finally being able to trust your AI coding assistant to handle framework-specific logic without constantly double-checking its work.

    The idea of spinning up an MCP server and feeding it relevant documentation is really exciting. The video even shows a demo of coding an AI agent with Context7. I’m definitely going to experiment with this on my next Laravel project where I’m using a complex package. It’s worth trying because it tackles a very real problem, and the potential for increased accuracy and efficiency with AI coding is huge. Plus, the video claims you can get up and running in minutes, so it’s a low-risk way to potentially unlock a significant productivity boost.