The rumor that performance drops when using the same model on Copilot #182537
Topic Area: Question · Copilot Feature Area: Copilot Agent Mode

Version: 1.107.1

I apologize for my negative view of Copilot, but I'm struggling because there are few reference examples or credible sources of information available where I am. I often hear rumors that, even when using the same model, GitHub Copilot Chat performs significantly worse than the original tool. I tried Claude Code a few months ago and remember feeling that, with the same Sonnet 4.5 and the same prompts, Claude Code seemed smarter. When I select Gemini or GPT models in Copilot, I often find them unusable.

If there really were no significant performance difference between Copilot and the alternatives, GitHub Copilot should be attractive given its pricing, model selection options, and the Microsoft brand — yet in reality it seems to have few core users. So what's actually going on? Is it true that GitHub Copilot Chat can't deliver a model's full performance? Am I expecting too much from AI? Or am I just bad at using it?

I need to automate most of my coding work myself for my job, and I haven't written code by hand for a while, but I'm constantly struggling to tune skills, copilot-instructions.md, agent.md, and so on.
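For anyone tuning the same files: repository-level custom instructions for Copilot Chat live in `.github/copilot-instructions.md` and are written in plain natural language. A minimal sketch — the project conventions below are placeholders, not recommendations, so substitute your own rules:

```markdown
# Copilot instructions for this repository

<!-- Example conventions only; replace with your project's actual rules. -->
- This is a TypeScript monorepo; prefer idiomatic TypeScript and avoid `any`.
- Run `npm test` and make sure it passes before declaring a task complete.
- Keep responses concise: show only the changed code, not entire files.
```

Short, specific, checkable rules like these tend to work better than long essays, since the whole file is prepended to every chat request.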
Replies: 2 comments
The performance gap is real. Claude Code and similar tools give the model more freedom to think and act, which is why Sonnet feels "smarter" there even with identical prompts.

My recommendation: if you're already hitting 300% usage with Opus and constantly tweaking configs, you might want to try Cursor. It uses the same models (Claude, GPT, etc.) but with smarter context handling, less friction, and faster responses — with it, Opus feels like Opus. After I started using that editor, I cut my debugging time in half.

Good luck!
Thank you. I'll consider Cursor as well. Unfortunately, it seems there's a bug and I can't use Skills and subagents anymore — bad luck. Ideally I'd get everything done with Copilot, so I hope it improves soon.