Serious Issue with GitHub Copilot: A System That Fails to Deliver and Harms Projects #162634
Replies: 6 comments 5 replies
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
Testing this issue is incredibly simple! All you need to do is give GitHub Copilot a piece of code and ask it to analyze it, then ask how much of the output was based on actual code versus assumptions. On the first attempt, it will admit to guessing 95% of the analysis. Each time you request a more accurate breakdown, it reduces the assumptions by around 30% per iteration, meaning a fully precise analysis takes at least five rounds of corrections—and even then, I'm still not convinced it delivers truly reliable results.
Yep, 100% this. It's a god-tier coder packaged inside Dory from Finding Nemo. It iterates with you at length, only to output some overengineered code that's nowhere near capable of doing what we just iterated over. Or there's its constant injection of what it thinks your project needs or doesn't need: that core function that does all the work? Nah, let's truncate 90% of that. Then it trips over itself trying to figure out why there are so many errors. Or the endless reliance on injecting compatibility or fallback code that MASKS THE FAILURE of the actual code flow—god-tier annoying, this specifically. It's very bipolar: it has good days and bad days. You can't work reliably on anything 1000+ lines.
I just gave Copilot a list of things to do. It charged me a premium credit but didn't do 90% of the work. I had to ask it to go back and do the work it had skipped. Its response was "Oh yeah, you're right," and then it charged me another credit.
Another issue is when it does something that doesn't work and then pretends the code was already like that, as if it weren't the one that created it. A credit is charged, and then another to fix the mistakes it made previously. I am starting to think it is actually engineered this way intentionally.
I literally had to explain to it the pipeline for a metaclass in Python: using the metaclass to cache created objects by an id so that only a single instance can exist for a given id. It kept arguing with me about why that wasn't going to work. And why is "robust" its favorite tagline for almost all the code it produces? Code that doesn't work right 80% of the time—there is nothing "robust" about code that doesn't work.
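For context, the metaclass pipeline described here is a standard Python pattern. A minimal sketch of what was being asked for might look like this (the names `CachedMeta` and `Resource` are illustrative, not from the original project):

```python
class CachedMeta(type):
    """Metaclass that caches instances by id, so only a single
    instance can ever exist for a given id."""

    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        # Each class using this metaclass gets its own cache.
        cls._cache = {}

    def __call__(cls, obj_id, *args, **kwargs):
        # Intercept instantiation: return the cached instance if one
        # already exists for this id, otherwise create and cache it.
        if obj_id not in cls._cache:
            cls._cache[obj_id] = super().__call__(obj_id, *args, **kwargs)
        return cls._cache[obj_id]


class Resource(metaclass=CachedMeta):
    def __init__(self, obj_id):
        self.obj_id = obj_id
```

With this in place, `Resource("a") is Resource("a")` is `True`: the metaclass's `__call__` runs before `__init__`, so the second call never constructs a new object at all.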
Select Topic Area: Product Feedback
Copilot Feature Area: Copilot in GitHub
Body
Let’s be real—this platform is not a place to play games with users. By releasing this version of GitHub Copilot, you’ve made a serious mistake, and honestly, it’s baffling.
I’m writing this with full bluntness so you understand that the product you’re offering can actually cause real damage in the real world. Developers—regardless of their experience level—haven’t got time to waste on this nonsense. Their time is valuable, and it’s not something you can afford to gamble with.
I am deeply dissatisfied with GitHub Copilot in VS Code. This tool has proven to be highly unreliable, falling short of its promises and causing significant damage to my project. Microsoft should reconsider promoting this tool as "AI assistance" when it fails to perform adequately in real-world scenarios.
The primary issue is that, despite granting Copilot full access to all my project files, it only analyzed about 10% of the code and completed the rest with assumptions. This is unacceptable for a tool intended to assist developers. For example, in the DASHBOARD_ANALYSIS_COMPLETE.md document it generated, the initial version consisted of 60% speculative content. This included fabricated details about API structures, authentication flows, database relationships, and file structures—despite having access to the complete codebase. Even after repeated requests to base its output solely on the provided code, the revised version still contained 30% speculative content. Critical sections such as model relationships (90% guessed), database schema (100% guessed), frontend integration (100% guessed), and response formats (100% guessed) remained highly inaccurate.
This is not a minor shortcoming; it is a critical flaw that can derail actual projects. I spent over two weeks grappling with Copilot, resulting in multiple project failures before I identified the source of the problems. Even with full code access, Copilot only processes a small portion of the code and fills in the gaps with unchecked assumptions, without warning users of its limitations. This poses a serious risk, as developers may rely on its outputs and unintentionally compromise their work.
To compound the issue, Microsoft charges $10 per month for this unreliable service. Considering the time lost and the damage to my project, I believe compensation is warranted for the harm caused by this tool.
For Copilot to be effective, it must thoroughly analyze all provided code—including controllers, frontend dashboard code, database migrations, configuration files, and middleware implementations. Currently, it only reviews a fraction of the code, wasting developers' time and jeopardizing their projects. I strongly urge the GitHub team to address these fundamental issues and improve the system’s reliability.
Regarding Compensation
It’s almost laughable—Microsoft is charging $10 a month for a tool that feels more like a liability than an asset. Let’s be real: by offering GitHub Copilot in its current state, you’re essentially using developers like me as unpaid alpha testers. We’re not just users; we’re doing the heavy lifting of testing your half-baked AI, debugging its mistakes, and reporting its failures—all while paying for the privilege.
Instead of charging us, you should be compensating us for the time and effort we’re putting into making Copilot usable. After all, we’re the ones dealing with the fallout when it hallucinates code, fabricates documentation, and derails projects. If you’re going to treat us like beta testers, at least have the decency to pay us for our work.
What Copilot Needs to Do
For Copilot to be worth its salt—let alone the $10 monthly fee—it needs to:
Thoroughly analyze all provided code: No more skimming 10% and guessing the rest. It should dig into every file—controllers, frontend, database migrations, configs, middleware—and base its output solely on what’s there.
Stop speculating: Fabricated content has no place in a developer tool. If it doesn’t know, it shouldn’t guess—it should flag the gap and let you fill it.
Warn users of limitations: Transparency is key. If it’s only processing a fraction of the code, it should tell you upfront.
Until it can deliver accurate, reliable assistance, it’s not just underperforming—it’s actively jeopardizing projects.
A Call to Action
To the GitHub team: this isn’t a minor hiccup; it’s a serious issue that undermines trust in Copilot. Developers deserve a tool they can rely on, not one that costs them time, money, and project stability. Please prioritize fixing these fundamental flaws—improve the system’s ability to process entire codebases accurately and eliminate the guesswork. Until then, it’s hard to see this as anything more than an expensive experiment we’re all unwillingly funding.
🚨 A Warning to All Copilot Users 🚨 I strongly urge all Copilot users never to trust this tool blindly. After every usage, ask Copilot to tell you how much of the code was based on speculation or assumptions. You will be shocked by the percentage.
This is nothing more than a toy—a vanity project for Microsoft to say, "Look, we’re in the AI game too! We’ve done something impressive!"
But in reality? That’s all it is—just bragging rights, nothing more.