Suggesting weighted geometric mean profile metric #188374
Replies: 2 comments
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
I like the structured thinking here. The recency weighting and square-root normalization directly tackle repo spam and one-hit wonders. Your scenario walkthrough shows it balances volume vs. quality better than raw stars.

Where I'd hesitate: computing this for every user in real time. GitHub serves billions of repo views daily; calculating weighted sums across all a user's repos on profile load would be expensive. They'd likely need a precomputed field updated nightly.

Also, "active" detection is messy. Is a repo with only issue activity "active"? What about a dependency update via Dependabot? GitHub's current "pushed_to" event might not capture meaningful contribution.

Your point about contributions to others' repos is the biggest gap. A core maintainer of React with 0 personal repos would score near zero. That's a major flaw. If you want to push this, I'd suggest:
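To illustrate the "active detection is messy" point above: a minimal sketch of what a `pushed_at`-based check might look like, using the field returned by the REST API's `GET /users/{user}/repos` endpoint. The 90-day window and the helper name are my assumptions, not anything GitHub does, and as noted, `pushed_at` still counts Dependabot-style pushes.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical threshold: a repo counts as "active" if it was pushed
# to within the last 90 days. This is an assumption for illustration.
ACTIVE_WINDOW = timedelta(days=90)

def is_active(repo, now=None):
    """repo: dict with an ISO-8601 'pushed_at' field, as returned by
    the GitHub REST API, e.g. '2024-05-01T12:00:00Z'.

    Caveat: pushed_at is bumped by any push, including automated
    dependency updates, so this over-counts "meaningful" activity.
    """
    now = now or datetime.now(timezone.utc)
    pushed = datetime.strptime(repo["pushed_at"], "%Y-%m-%dT%H:%M:%SZ")
    pushed = pushed.replace(tzinfo=timezone.utc)
    return now - pushed <= ACTIVE_WINDOW
```

Any real version would need to look past raw pushes (e.g. at commit authorship) to dodge the Dependabot problem.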
Select Topic Area
Product Feedback
Body
I couldn't find a dedicated place to give GitHub product suggestions, so I thought this would be the best spot for it.
Problem
With the rise of AI slop and low-effort contributions, stars are no longer a reliable metric for understanding the technical ability of a GitHub profile.
My suggestion is a star-to-repo ratio. This isn't perfect, since some bad actors would still slip through, but at the very least it would filter out the obvious cases and show that GitHub cares about quality over volume.
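As a rough sketch of the simplest version of this idea, a plain star-to-repo ratio could look like the following (the function name and structure are mine, not part of the proposal):

```python
def star_to_repo_ratio(star_counts):
    """Average stars per repo: a crude first-pass quality signal.

    star_counts: list of star counts, one entry per repository.
    Returns 0.0 for an empty profile to avoid division by zero.
    """
    if not star_counts:
        return 0.0
    return sum(star_counts) / len(star_counts)

# A repo spammer with 200 near-empty repos scores far lower than a
# focused profile, even when the raw star totals are similar.
spammer = [1] * 190 + [100] * 10   # 1,190 stars total
focused = [500, 400, 300]          # 1,200 stars total
print(star_to_repo_ratio(spammer))  # 5.95
print(star_to_repo_ratio(focused))  # 400.0
```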
The metric
Parameters:
Recency Weights:
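As a hedged illustration only, here is one way a recency-weighted, square-root-normalized score could be wired together. The weight values, the 1/√n repo-count normalization, and the exact formula are all my assumptions, not the proposal's actual parameters:

```python
import math

# Hypothetical placeholder weights, not the proposal's parameters.
RECENCY_WEIGHTS = {
    "active": 1.0,  # pushed to recently
    "stale": 0.3,   # no meaningful activity for a long time
}

def profile_score(repos):
    """Illustrative recency-weighted, sqrt-normalized profile score.

    repos: list of (stars, status) tuples, status in RECENCY_WEIGHTS.
    sqrt(stars) dampens a single viral hit, the recency weight
    discounts stale repos, and dividing by sqrt(repo count) keeps
    repo spam from inflating the total.
    """
    if not repos:
        return 0.0
    total = sum(RECENCY_WEIGHTS[status] * math.sqrt(stars)
                for stars, status in repos)
    return total / math.sqrt(len(repos))
```

Under these assumed weights, Dev B (50 repos, mostly active) outscores Dev A's single 10,000-star repo, and Dev C's 200-repo spam profile scores near the bottom, consistent with the scenarios below.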
Scenario 1: one viral hit
Dev A
1 repo, 10,000 stars, still active
Scenario 2: consistent contributor
Dev B
50 repos, 40 active (avg 500 stars), 10 stale (avg 200 stars)
Dev B beats Dev A
Scenario 3: repo spammer
Dev C
200 repos, 190 have 0 to 2 stars (avg 1), 10 have 100 stars, all stale
Low score; creating empty repos doesn't game it.
Scenario 4: someone who abandoned their repos, but made high-quality contributions
Dev D
5 repos, all 3+ years stale, but 8,000 stars each
Still scores well (past impact matters), but less than if they were still active, where it would be:
Scenario 5: active contrib, modest stars
Dev E
30 repos, all active, avg 50 stars each
A modest but respectable score that reflects real-world utility.
Final Ranking: D > B > A > E > C
Where does it fail?
No metric is perfect. Each comes with its own biases, but at least this gets us somewhere?