
Comments (245)

  • keithwhor
    Today MCP added Streamable HTTP [0], which is a huge step forward as it doesn't require an "always-on" connection to remote HTTP servers. However, if you look at the specification, it's clear that bringing the LSP-style paradigm to remote HTTP servers adds a bunch of extra complexity. This is a tool call, for example:

    { "jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": { "name": "get_weather", "arguments": { "location": "New York" } } }

    Which traditionally would just be an HTTP POST to `/get_weather` with `{ "location": "New York" }`.

    I've made a suggestion to remove some of this complexity [1] and fall back to a traditional HTTP server, where a session can be negotiated with an `Authorization` header and we rely on traditional endpoints plus OpenAPI + JSON Schema endpoint definitions. I think it would make server construction a lot easier, and web frameworks would not have to be materially updated to adhere to the spec -- perhaps just by adding a single endpoint.

    [0] https://spec.modelcontextprotocol.io/specification/2025-03-2...
    [1] https://github.com/modelcontextprotocol/specification/issues...
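    The two request shapes contrasted above can be sketched side by side (a toy illustration of the envelopes only, not either spec's reference implementation):

```python
import json

def jsonrpc_tool_call(call_id, name, arguments):
    """Wrap a tool invocation in the MCP JSON-RPC envelope."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

def plain_http_call(name, arguments):
    """The traditional equivalent: an endpoint path plus a POST body."""
    return f"/{name}", json.dumps(arguments)

envelope = jsonrpc_tool_call(2, "get_weather", {"location": "New York"})
path, body = plain_http_call("get_weather", {"location": "New York"})
```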
  • talles
    > Think of MCP like a USB-C port for AI applications.

    That analogy may be helpful for mom, but not for me as a software engineer.
  • bob1029
    I am really struggling with what the value-add is with MCP. It feels like another distraction in the shell game of contemporary AI tech.

    > MCP is an open protocol that standardizes how applications provide context to LLMs.

    What is there to standardize? Last I checked, we are using a text-to-text transformer that operates on arbitrary, tokenized strings. Anything that seems fancier than tokens-to-tokens is an illusion constructed by the marketing wizards at these companies. Even things like tool/function calling are clever heuristics over plain-ass text.

    > Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use: ...

    This looks like microservices crossed with AI. I don't think many are going to have a happy time at the end of this adventure.
  • tcdent
    The big question in my mind was whether OpenAI was going to formally endorse this (since it was created by Anthropic), but we have our answer.

    MCP is now the industry standard for connecting LLMs to external tools.
  • jtrn
    I hoped OpenAI would support OpenAPI for connecting to tools. Having created a couple of MCP servers, MCP feels like a less flexible and worse-documented API to me. I can't really see anything that is made better by MCP over OpenAPI. It's a little less code for a lot fewer options. Give it some time and it will also get Swagger built in.

    It's solving a problem that was already robustly solved. So here we go with another standard.
  • samchon
    Can't I do function calling with OpenAPI? I also feel like MCP is reinventing the wheel.

    I have been converting OpenAPI documents into function calling schemas and doing tool calling since function calling first came out in 2023, but it's not easy to recreate a backend server to fit MCP.

    Also, these days I'm making a compiler-driven framework specialized for function calling, but I'm a little cautious about whether to support MCP. It enables zero-cost tool calling for TypeScript classes based on the compiler, and it also supports OpenAPI.

    However, in the case of MCP, in order to fit it to the compiler-driven philosophy, I would need to create a backend framework for MCP development first, or create an add-on library for a famous framework like NestJS. I can do the development, but there's so much more to do compared to OpenAPI tool calling, so I'm hesitant.
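    The OpenAPI-to-function-calling conversion described above can be sketched roughly like this (a simplified toy translation handling JSON request bodies only; the `get_weather` operation is made up for illustration):

```python
def openapi_op_to_tool(path, method, operation):
    """Translate one OpenAPI operation into an OpenAI-style
    function-calling tool schema (simplified: body schema only)."""
    body = (operation.get("requestBody", {})
            .get("content", {})
            .get("application/json", {})
            .get("schema", {"type": "object", "properties": {}}))
    return {
        "type": "function",
        "function": {
            "name": operation.get("operationId", f"{method}_{path.strip('/')}"),
            "description": operation.get("summary", ""),
            "parameters": body,
        },
    }

# A hypothetical OpenAPI operation fragment:
op = {
    "operationId": "get_weather",
    "summary": "Look up current weather",
    "requestBody": {"content": {"application/json": {"schema": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    }}}},
}
tool = openapi_op_to_tool("/get_weather", "post", op)
```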
  • gronky_
    They’re all in. They announced they’ll add support for it in the desktop app and the API in the coming months: https://x.com/OpenAIDevs/status/1904957755829481737
  • simonw
    "Think of MCP like a USB-C port for AI applications."Given the enormous amounts of pain I've heard are involved in actually implementing any form of USB, I think the MCP community may want to find a different analogy!
  • jauntywundrkind
    I assume they're (for now at least) targeting the old HTTP+SSE version of MCP, and not the new Streamable HTTP version? https://github.com/modelcontextprotocol/specification/pull/2...

    There are some other goodies too: OAuth 2.1 support, JSON-RPC batching... https://github.com/modelcontextprotocol/specification/blob/m...
  • nomilk
    What are people using MCPs for? I've searched on YouTube and see a lot of videos explaining how MCP works, but none showing practical uses for a programmer (aside from getting the weather via Cursor).
  • goaaron
    Call it a protocol and suddenly it sounds like a foundational technology. Nah, it's just a fancy JSON schema that lets LLMs play hot potato with metadata.
  • johnjungles
    If you want to try out MCP (Model Context Protocol) with little to no setup: I built https://skeet.build/mcp where anyone can try out MCP for Cursor and now OpenAI agents!

    We did this because of a pain point I experienced as an engineer having to deal with crummy MCP setup, lack of support, and the complexity of trying to stand up your own. Mostly for workflows like:

    * start a PR with a summary of what I just did
    * slack or comment to Linear/Jira with a summary of what I pushed
    * pull this issue from Sentry and fix it
    * find a bug and create a Linear issue to fix it
    * pull this Linear issue and do a first pass
    * pull in this Notion doc with a PRD, then create an API reference for it based on this code
    * Postgres or MySQL schemas for rapid model development

    Everyone seems to go for the hype, but ease of use, practical pragmatic developer workflows, and high-quality polished MCP servers are what we're focused on. Lmk what you think!
  • rgomez
    Shamelessly promoting here: I created an architecture that allows an AI agent to have those so-called "tools" available locally (under the user's control), and it works with any kind of LLM, and with any kind of LLM server (in theory). I've been showing demos of it for months now. It works as middleware, in-stream, between the LLM server and the chat client, and it works very well. The project is open source, though the repo is outdated, simply because no one has expressed interest in looking into the code. Here is the repo: https://github.com/khromalabs/Ainara. There's a link to a video in there. Yesterday I recorded a video showcasing DeepSeek V3 as the LLM backend (but it could be any from OpenAI as well, or Anthropic, whatever).
  • keyle
    I'm new to "MCP"... It says here that even IDEs plug into this MCP server [1], as in you don't edit files directly anymore but go through a client/server?

    It wasn't bad enough that we now run servers locally to constantly compile code and tell us via JSON we made a typo... Soon we won't even be editing files on a disk, but accessing them through a JSON-RPC client/server? Am I getting this wrong?

    [1] https://modelcontextprotocol.io/introduction
  • eigenvalue
    This is great. I was debating whether I should do my latest project using the new OpenAI Responses API (optimized for agent workflows) or using MCP, but now it seems even more obvious that MCP is the way to go.

    I was able to make a pretty complex MCP server in 2 days for LLM task delegation: https://github.com/Dicklesworthstone/llm_gateway_mcp_server
  • paradite
    MCP is basically commoditizing SaaS and software by abstracting them away behind the AI agent interface.

    It benefits MCP clients (ChatGPT, Claude, Cursor, Goose) more than the MCP servers and the services behind the MCP servers (GitHub, Figma, Slack).
  • larodi
    Claude is like years ahead of everyone else with tools and agentic capabilities.
  • chaosprint
    Claude needed these tools in 2024, so having the community contribute for free was actually a smart move. Service providers get more traffic, so they're into it. Makes sense.

    Claude 3.5 was great at the time, especially for stuff like web dev. But now DeepSeek V3 (0324) is way better value. Gemini's my default for multimodal. OpenAI still feels smartest overall. I've got QwQ running locally. For deep research, free Grok 3 and Perplexity work fine. Funny enough, Claude 3.7 being down these two days didn't affect me at all.

    I checked out MCP since I contribute to open source, but decided to wait. A few reasons:

    - setup's kind of a mess. It's like running a local Python or Node bridge, forwarding stuff via SSE or stdio. Feels more like bridging than protocol innovation.
    - I think eventually we need all apps to be built on some AI-first protocol. I think only Apple (maybe Google) has that kind of influence. Think Lightning vs USB-C.
    - performance might be a bottleneck later, especially for multimodal.
    - same logic as no. 2, but the question is: do you really want every app to be AI-first?

    Main issue for me: tools that really improve productivity are rare. A lot of MCP use cases sound cool, but I'll never give full GitHub access to a black box. Same for other stuff. So yeah: interesting idea, but a hard ceiling.
  • polishdude20
    Ideally, in the future we won't need an MCP server when the AI can just write Unix terminal commands to do anything it needs to get the job done? It seems using an MCP server and having the AI know about its "tools" is more of a training-wheels approach.
  • avaer
    Does anyone have any prior art for an MCP server "message bus" with an agent framework like Mastra?

    E.g. suppose I want my agent to operate as a Discord bot listening on a channel via an MCP server subscribed to the messages, i.e. the MCP server itself is driving the loop, not the framework, with the agent doing the processing.

    I can see how this could be implemented using MCP resource pubsub, with the plugin and agent being aware of this protocol and how to pump the message-bus loop, but I'd rather not reinvent it. Is there a standard way of doing this already? Is it considered user logic that's "out of scope" for the MCP specification?

    EDIT: added an example here: https://github.com/avaer/mcp-message-bus
  • TIPSIO
    I know Cloudflare has been talking about remote MCP for a while, does anyone have a solid example of this in practice?
  • ursaguild
    The real benefit I see from MCP is that we are now writing programs for both users and AI assistants/agents. By writing MCP servers for our services/apps, we are providing a standardized way for AI assistants to integrate with tools and services across apps.
  • zoogeny
    I'm curious what the revenue plan is for MCP authors. I mean, I can see wanting to add support for existing products (like a code/text editor, image/sound/video editor, etc.)

    But is there a market for stand-alone paid MCP services? It seems these will mostly be usurped by the models themselves sooner or later. I mean, if you create an MCP popular enough to actually make money, the foundation model will soon be able to just do it without your service. Almost like you are doing experimentation on high-value agent features for free.

    Also, something about the format just reeks of SOAP to me. It feels over-engineered. Time will tell, obviously.
  • izwasm
    MCP is great. But what I'd like to understand is: what's the difference between MCP and manually prompting the model with a list of tools and descriptions, then calling the specific function based on the LLM response?
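    The manual approach described above can be sketched in a few lines (a toy dispatcher with a stubbed model reply; the tool names and the `fake_llm` function are made up for illustration):

```python
import json

# Hypothetical local tools, described to the model in the prompt.
TOOLS = {
    "get_weather": lambda args: f"Sunny in {args['location']}",
    "add": lambda args: args["a"] + args["b"],
}

def tools_prompt():
    """Build the tool list you would paste into the system prompt."""
    return ("You may call one of these tools by replying with JSON "
            '{"tool": ..., "arguments": ...}: ' + ", ".join(TOOLS))

def fake_llm(prompt):
    """Stand-in for a real model call; returns a canned tool choice."""
    return '{"tool": "get_weather", "arguments": {"location": "New York"}}'

def dispatch(reply):
    """Parse the model's JSON reply and invoke the chosen tool."""
    call = json.loads(reply)
    return TOOLS[call["tool"]](call["arguments"])

result = dispatch(fake_llm(tools_prompt()))
```

    MCP standardizes exactly this loop (tool discovery, schemas, invocation) so clients and servers written by different parties interoperate, rather than each app inventing its own prompt-and-parse convention.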
  • jswny
    Does anyone know how MCP servers would be used via the API? I thought they ran locally only, so how would the OpenAI API connect to them when handling a request?
  • esafak
    That makes it table stakes for any agent framework.
  • ondrsh
    This seems to implement just the tools functionality: no resources or prompts, roots or sampling. I can't blame them.

    I'm wondering, though, about progress notifications and pagination. Especially the latter should be supported, as otherwise some servers might not return the full list of tools. Has anyone tested this?
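    MCP's `tools/list` uses cursor-based pagination (a `nextCursor` field in each result), so a client that ignores it sees only the first page. A minimal sketch of draining all pages, with a stubbed `list_tools_page` standing in for the actual request:

```python
def list_tools_page(cursor=None):
    """Stub for a tools/list request; a real client would send
    {"method": "tools/list", "params": {"cursor": cursor}}."""
    pages = {
        None: {"tools": ["get_weather", "search"], "nextCursor": "p2"},
        "p2": {"tools": ["create_issue"]},  # no nextCursor: last page
    }
    return pages[cursor]

def all_tools():
    """Follow nextCursor until the server stops returning one."""
    tools, cursor = [], None
    while True:
        page = list_tools_page(cursor)
        tools.extend(page["tools"])
        cursor = page.get("nextCursor")
        if cursor is None:
            return tools
```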
  • analyte123
    If you have a clean, well-documented API that can be understood in under 30 minutes by a decent software engineer, congrats: you are MCP ready. I wonder how many discussions there will be about "adding MCP support" to software without this prerequisite.
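    In practice, "adding MCP support" to such an API is mostly mechanical: each existing endpoint becomes a tool whose JSON Schema is derived from its signature. A rough sketch of that mapping in pure Python (no MCP SDK; a real server would use the official SDK, and `get_weather` is a made-up example function):

```python
import inspect

PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Derive an MCP-style tool description from a function's type hints."""
    props = {
        name: {"type": PY_TO_JSON[param.annotation]}
        for name, param in inspect.signature(fn).parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputSchema": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

# An existing, well-documented API function:
def get_weather(location: str) -> str:
    """Return current weather for a location."""
    return f"Sunny in {location}"

schema = tool_schema(get_weather)
```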
  • singularity2001
    Hopefully they integrate it with their custom GPT approach! I think custom GPTs already work fantastically, especially since the addition of the @ sign and automatic completion to easily include them in normal conversations. The only thing that was missing was access to the local machine.
  • ginko
    Master Control Program?
  • __mharrison__
    Awesome. I'm preparing an AI course and it looks like the APIs are starting to converge finally...
  • spiritplumber
    We need to also add Tron support, otherwise who's going to fight for the users?
  • Tewboo
    Impressive move by OpenAI. MCP support in Agents SDK will greatly enhance AI agent capabilities. Can't wait to see how developers leverage it.
  • Tteriffic
    So we’re back to programming?
  • dhanushreddy29
    is this the new langchain?
  • nurettin
    Kinda naive question: all you need is a way to convince the LLM to output JSON according to your schema, then call the function. So what is the use of MCP servers? Why complicate things by adding another layer?
  • cruffle_duffle
    I was wondering if this would ever happen. I wrote an MCP server that hooked up Azure Monitor (or whatever the hell Microsoft is calling it) via Microsoft's Python SDK so I could get it to query our logs without using command-line tools. Took about half a day, mostly due to writing against the wrong Microsoft SDK. It will be nice to let ChatGPT have a crack at this too!
  • shrisukhani
    MCP is now the standard everyone must conform to. Couldn't possibly have predicted that 2 months ago.
  • ogundipeore
    Context: a future I'd want re: the coding experience is receiving pull requests for the issues I have in my code repo, based on code generated by an agent that understands it. I think the Cursor editor gets closest to this experience, but for a single task.