GitHub Copilot (Opus 4.6) is severely ignoring prompts and dropping tasks mid-generation #191846
Replies: 2 comments
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
I don't think there's an answer. There is no remuneration if the product fails or does not perform as it should. We are guinea pigs paying to be on the bleeding edge of AI programming assistance with Copilot. Copilot isn't too expensive, but if nobody is supporting my open source application, I wonder how long I will keep paying out of pocket for this. I do it for fun, but not to waste money.
🏷️ Discussion Type
Product Feedback
💬 Feature/Topic Area
VS Code
Body
Hi GitHub Team,
I am experiencing a severe issue with GitHub Copilot (specifically using the Opus 4.6 model) where the AI completely loses context and drops tasks mid-generation. This is making the tool unusable for standard tasks and is leading to a massive waste of paid credits.
The Issue:
- I was trying to get the AI to complete a relatively small task and provided over 10 clear prompts to guide it.
- At the start, Opus 4.6 correctly understood the prompt and created a proper list of 13 To-Dos.
- Shortly after, however, it seemed to forget the original instructions and reduced the list to just 5 To-Dos.
- By the time it finished generating the final response, it had completed only 2 To-Dos, completely ignoring the rest of the plan.
The Impact:
This level of context loss is extremely frustrating, and the response quality seems to degrade drastically as the conversation goes on. Since this is a paid service, having to regenerate responses multiple times just to get a fraction of the work done is a huge waste of credits and time. It feels like the model's capabilities are being heavily throttled.
Could the team please investigate why the model's instruction adherence and context retention are failing so poorly?
Thank you.