-31

TL;DR: The opinion experiment revealed that users aren't looking for subjective debates; rather, they most often seek detailed explanations. To capture this value, we propose a radical shift: stop closing questions and introduce a new curation model—The Workshop for collaboration and The Archive for curation. Sit down, grab a drink, and start reading.


Years ago, Shog9 wrote that managing a community is like managing water in permaculture. He described laying “contour strips” to catch the rain, managing the torrents that threatened to wash the farm away.

Shog9 wrote from the perspective of managing abundance. I grew up in the southwestern deserts of the United States and know what it means to run out of water.

I know the feeling of scrambling up the walls of a slot canyon to escape flood waters brought by a coming storm, watching precious resources flash by and strip the land away. It is not lost on me how slight miscalculations of water management can be the difference between a landscape that survives and one that withers away.

For a long time, the water of this platform (user engagement and contributions) was plentiful. We built dams to control that flow. This made sense in the past and was the correct call. Stack Overflow, once a plentiful paradise, is now experiencing a drought. Somehow, we find ourselves somewhere in the desert with a dam. Our fields are parched, yet our sluice gates are almost entirely closed.

The Data: Opinion Question Experiment (The Rain)

After running the opinion-based question experiment for just about two months, we know it's still raining upstream, and we have water sitting in a reservoir just behind the dam.

Are closures for “opinion-based” reasons going down and are we seeing more “opinion” questions?

In short, yes. Though this should not surprise anyone, given that they can’t be closed and there is no real, defined set of rules. Anyone poking around on SEDE can see that question closures are continuing to decline.
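
If you would like to poke at the trend yourself but SEDE isn’t your thing, a rough sketch along these lines pulls closure counts from the public Stack Exchange API instead. Treat it as illustrative rather than an official query: the /search/advanced endpoint and its closed, fromdate, todate, and filter parameters are documented API features, but the API reflects live (undeleted) data, so the numbers will not match SEDE exactly.

    # Illustrative sketch: count currently-closed Stack Overflow questions created in a
    # given window, using the public Stack Exchange API as a stand-in for SEDE.
    import datetime as dt
    import requests

    API = "https://api.stackexchange.com/2.3/search/advanced"

    def closed_count(start: dt.datetime, end: dt.datetime) -> int:
        """Number of currently-closed questions created in [start, end)."""
        resp = requests.get(API, params={
            "site": "stackoverflow",
            "closed": "True",                    # only questions that are closed right now
            "fromdate": int(start.timestamp()),  # window is by creation date
            "todate": int(end.timestamp()),
            "filter": "total",                   # built-in filter returning only {"total": N}
        })
        resp.raise_for_status()
        return resp.json()["total"]

    # Compare two recent months; rerun with earlier windows to see the longer trend.
    print(closed_count(dt.datetime(2025, 10, 1), dt.datetime(2025, 11, 1)))
    print(closed_count(dt.datetime(2025, 11, 1), dt.datetime(2025, 12, 1)))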

Are we seeing ongoing activity on these questions?

Yes. Opinion-based questions are getting about three times more replies than regular questions (answers + comments). As of this writing, there are 1600+ opinion-based questions, 78% of which have at least one reply. In terms of engagement, this is a clear signal of demand. Our month-over-month numbers show an increase of 16% in engagement (questions, answers, comments, replies), which is unusual given the typical slowdown in participation we see around this time of year. Some caveats: novelty is a potential contributing factor, but only time will tell us by how much. By our count about 85 replies were about telling the asker that their questions should have been asked as traditional Q&A (which is less than 2% of overall replies). This however isn't an easy thing to track, so there may be a few unaccounted for.

Is any of this good?

Our specific goal was to increase engagement, and to that end, this is a measurable success. Given the 16% lift in activity we observed, these results suggest this is a viable strategy to stabilize participation and open new avenues for growth. Crucially, that 3x increase in replies came with a lower flag rate for these new question types (2.9% vs 3.26%). This indicates we can expand participation without sacrificing quality. While technical bugs caused some content to leak into the wrong queues or question lists, these are solvable engineering issues, not existential threats.

Even more interestingly, this wasn’t just new users asking lazy questions. The median reputation of users asking these opinion questions was 10x that of users asking troubleshooting questions (113 vs 11), and their accounts were significantly older (median age of ~7 years vs ~2.5 years).

This tells us that experienced developers are hungry for these conversations. And the community valued them: nearly 30% of non-deleted opinion questions received at least one upvote, suggesting that when we allow space for nuance, the community finds value in it.

We won’t ignore the fact that this is far from a finished product and that there is work to be done to get there. But if you have been concerned about the drought of participation, this experiment offers a clear signal: the rains have not left us entirely; we just need to adjust how we capture them.

Our Learnings: “Opinion” is a misnomer

For years, we have treated “opinion-based” as synonymous with “subjective.” Both have been defined loosely as content that dilutes the factual purity of the platform. A light analysis of what users are asking in this experiment revealed four distinct intents, which staff categorized into the following groups:

  • Explanations: Focusing on conceptual understanding, learning, or finding why rather than just how.
  • Implementation: Focusing on achieving an operational result, like getting code to run.
  • Improvement: Improving a working solution.
  • Other: Self-promotion, or any content that didn’t fit the other categories.

Ironically, we found that the “Troubleshooting” type we introduced as an option during the experiment was almost entirely categorizable under our new implementation category: users trying to get bugs fixed.

The revelation came when we analyzed the questions users submitted using the “Opinion-based” option provided during the experiment. These were not just random solicitations for comments or opinions, which was a concern for both the company and the community; a majority appeared to be questions asking for explanations. Users were asking for help making architectural decisions, understanding why a framework behaves the way it does, and weighing considerations before writing code.

Two side-by-side bar charts compare question categories, showing that "Opinion-based questions" are primarily categorized as "Explanation" (approximately 75%), whereas "traditional Q&A" questions are predominantly categorized as "Implementation" (approximately 63%). This highlights an inverse relationship between the two question types, with the remaining categories (Improve and Other) representing minor percentages in both charts.

In the current curation system, curators close questions as opinion-based because they don’t fit our strict “one right answer” format. In reality, a lot of these are questions of conceptual understanding, based in facts, reasoned through collaboratively, and grounded in truth. By damming these up for years, we have incidentally been blocking meaningful attempts by people trying to gain deeper understandings and have “why” conversations that help them grow. “Opinion-based” often just meant “nuance required.”

The historical struggle

Despite the reality of declining participation, we seem to have shut the sluice gate even tighter. Our current model of curation has resulted in a state where roughly 40-50% of all incoming questions end up closed every month. This high rejection rate isn’t a failure of the people doing the curating; it is the inevitable result of a system designed for flood control operating in a time of scarcity. The dam doesn’t help anymore. We are aggressively filtering for a volume that no longer exists.

The question of closure has been a topic of conversation for a long time. Let’s have another! That linked post is still worth a read; functionally, nothing that was laid out there has changed all that much. Closures are more user-friendly than they were 12 years ago and usually take far fewer votes, but essentially every issue raised in that question and in most of the linked questions remains today. The vast majority of casual visitors’ and new askers’ questions do not naturally align with our content standards. Various attempts over 16 years to change that reality, from both the company’s side and the community’s side, have not succeeded. This has cost us.

Our curators built a resource that every developer on Earth has relied on at some point in the last 16 years. That was done using a system that worked for an era of abundance, when we could pick and choose only those questions that mattered most. But it is becoming clearer that the system we currently have is considerably out of alignment with the needs of developers today.

We have had this structural problem for a while. Renaming "close" to "on hold" didn't fix it. The problem isn't that the dam is rude; it's that it works too well. We don't need a new dam—we need to let more water through.

Consider this principle: “If a question is closed and goes unanswered forever, we'll never get a chance to see who it could have helped.” The answer is what gives value to the question – both to the asker and to future visitors. Maybe it turns out an answer is only valuable to the asker, but an unanswered question is valuable to no one.

The proposal: The Workshop & The Archive

This is a concept; no development is underway. While this introduces new edge cases, we believe it addresses our most pressing problem: our current curation model is designed for an era of abundant questions that no longer exists.

The Workshop: The contour strips

A scenic landscape featuring rolling hills where agricultural crops are planted in neat, curved rows that follow the natural terrain, an example of contour farming. A dirt track winds through the fields, leading toward green, brush-covered slopes, with a view of the blue ocean and horizon visible in the distance under a clear sky.

In Shog9’s answer, he spoke of contour farming: a way to catch water and allow it to be absorbed into the fields rather than running off.

The Workshop would be our new contour strips. Just as contour strips are intended to catch all the water, The Workshop would be intended to catch almost every appropriately on-topic, answerable question on Stack Overflow.

Think of this as the public evolution of the Staging Ground. We want to take the tools that worked in that sandbox (nudges, templates, structured feedback) and make them the default experience for everyone. Spam and abuse would still be deleted, but the strict rules against conversation would be lifted here.

The Workshop would be the single entry point for all questions and utilize a threaded reply structure with improved notifications rather than the strict Q&A format. This would allow for messier collaboration. Incomplete or unclear questions could be discussed back and forth without penalty. We want to keep the conversation open while the water soaks in. The purpose is to see what value these posts might hold for the future.

With proper irrigation in place, we’d want to see organic growth:

Beyond that? Minimal friction. If you wanted to help by saying, “Have you tried checking your variable names?”, that would be a valid form of irrigation. We wouldn’t be checking for formatting perfection: in The Workshop, the first and foremost goal would be to help the asker.

The Archive: The harvest

The Archive would be the “harvest”—the curated knowledge we offer the developer community at large. You are likely thinking, “But how do we curate if we cannot close?”

Introducing Expert Endorsements

Expert Endorsements (a working title), a unique vote reserved for some community members, would be a validation of the question’s future value from the established curators on Stack Overflow. It would be the opposite of a close flag. Getting one means you have created content that is the cream of the crop; it means your question can be found easily and that it should be surfaced to future searchers and learners, because it's good.

Instead of voting to close, you’d vote to elevate. The system wouldn’t be built to hide or remove; it would be optimized to highlight what's worth looking at. Curators would not need to classify and categorize every single incoming question, only pull out the most useful posts and cultivate them for the harvest.
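
To make the contrast concrete, here is a purely hypothetical sketch (none of these names exist in any real codebase; it only illustrates the idea): the current model curates by removal, filtering out what has been closed, while the proposed model removes nothing and instead lets endorsed content float to the top.

    # Hypothetical illustration of "elevate, don't hide" - not a real data model.
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        score: int
        closed: bool = False       # current model: closed posts get hidden or buried
        endorsements: int = 0      # proposed model: experts add a positive signal

    def current_view(posts: list[Post]) -> list[Post]:
        # Curation by removal: drop closed posts, then sort by score.
        return sorted((p for p in posts if not p.closed),
                      key=lambda p: p.score, reverse=True)

    def archive_view(posts: list[Post]) -> list[Post]:
        # Curation by elevation: nothing is removed; endorsements outrank raw score.
        return sorted(posts, key=lambda p: (p.endorsements, p.score), reverse=True)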

Today, a small, dedicated core of you are doing the majority of the work of maintaining the knowledge base. This is meaningful work, but it is not always popular. Few do it, and those who do earn criticism in the wider developer community for just trying to preserve and improve the knowledge for others to use. Further, this system is structurally predisposed to cause burnout. Despite sincere intentions, the current tools channel you toward burying questions rather than improving them, and ask you to apply those tools to each one of the tens of millions of questions on the platform.

We want to build tools that allow you to truly curate: selecting the best content for presentation, distribution, and refinement. We would need new tools, like proper mapping and support for duplicate content, but for now we only want to get the high-level idea out there. We want to move past the era of “curation as cleanup duty.” Instead, we would like to position the most committed members of our communities as what they are: a cast of unique individuals who should be celebrated for their commitment and expertise. These are people who leverage their deep experience to make the best knowledge on the Web shine.

Our request

This idea comes from rank-and-file staff after confronting the results of the opinion experiment and the reality of our platform activity data. It is not anywhere near a finalized roadmap, since we wanted to get it on Meta as soon as possible for discussion, but we are actively exploring these notions of "irrigation" and "harvesting."

However, to make The Workshop work, we might need to make some changes to how the core Q&A pages work for that space. We would probably want to move towards a threaded discussion format to support The Workshop. This presents a critical challenge: how do we visually protect The Archive when content is promoted from The Workshop? If a discussion page in The Workshop looks like a standard forum thread, we risk losing the prestige and discoverability of the best answer. We cannot let the harvest be buried by the chaff.

We need you to help us design the harvesting process:

  1. The mechanics: How does a discussion in The Workshop become an artifact in The Archive? Is this a physical migration (moving the Question and its best Answer to a new page) or a visual transformation (signaling high quality within the existing thread)? What are the mechanics of promoting a conversational chain in The Workshop into a standalone, Endorsed Artifact in The Archive? For example, if a “messy” Workshop question receives a brilliant answer, does the question need to be pruned before it enters The Archive, or can the answer stand on its own?

  2. The curator: Who should have the power to ‘Endorse’ a question or answer? Should it be reserved for Gold Badge holders, elected moderators, or a new tier of subject matter experts?

  3. Visual distinction: On a discussion page in The Workshop, when a curator gives an Expert Endorsement, how should it look? If the rest of the page utilizes a threaded discussion format, does the Endorsed Answer get "pinned" to the top? Does it get a gold frame? Should the classic answer styling be reserved exclusively for Endorsed Content to signal its status as an artifact on The Archive? And crucially, how do we allow users to toggle between the “messy” collaborative view and the clean “curated” view?

To summarize: we’re re-imagining Stack Overflow as a platform where curators can focus on elevating high-quality content to maximize our yield of worthy material, rather than on burying poor content because we’re holding on to a system designed to filter the high volumes of participation that are no longer our reality. We want to hear your feedback on this vision, either in the form of answers to this post or via email if you don’t feel particularly inclined to share your opinions publicly. For email, send your feedback to [email protected] with a subject line including: Attention: Hoid

If you think that change isn’t possible, or that question closures don’t need a serious rework, that is feedback we want to hear: please articulate why the status quo remains viable in light of the current state of the platform’s ecosystem. If, like us, you agree that systemic change is needed to meet the current reality of declining participation, your feedback on the above questions is particularly important to help us uphold the principles that keep Stack Overflow a source of reliable, high-quality content for generations to come.


We want to acknowledge that we’re posting this right before the holidays, and like many of you, we will be taking time off. Rest assured, no immediate work on this concept is planned, and we are at a highly conceptual point in the process. We take an end-of-year hiatus and won’t be checking back in until the first week of January at the earliest. Pace yourselves accordingly; there will be multiple opportunities to hammer out the details as we work through what has been raised before we make any decisions. Happy holidays!

  • 46
    It's not exactly surprising to see a 16% increase in engagement, when ordinarily this stuff would have been closed within an hour. Why are we assuming this is because of the interface you decided to use? Commented Dec 16, 2025 at 18:01
  • 33
    How much of that 16% increase in engagement was useful content that should be in the Q&A repository instead of noise that makes it more difficult for a future reader to get the answer they need? Commented Dec 16, 2025 at 21:53
  • 4
    The workshop = Staging Ground and the archive = Traditional Q&A just without closure of opinion-based questions? We kind of tried to go there for years, but whenever we push for quality, something breaks. Commented Dec 17, 2025 at 6:33
  • 13
    Gotta say that the post referencing Shog9 feels... good. Thank you. It also feels like this is the right path. Can't back it up with anything tangible, but this is not old cranky overworked Stack Overflow and it's not unhinged Reddit. The sweet spot is in the middle and this looks like it is going there. I can also speak from a personal perspective; I felt more liberated getting the "new" style of questions, there was far less stopping me from replying which just made the whole experience more enjoyable. There is little to no voting going on on such questions though, from what I could see. Commented Dec 17, 2025 at 9:13
  • 13
    I'm happy this post has an actual Vision that's different from "throw AI at everything". The bar is set low, but you jumped it Commented Dec 17, 2025 at 9:28
  • 9
    "Crucially, that 3x increase in replies came with a lower flag rate for these new question types (2.9% vs 3.26%)" Given that it's essentially a "chat" format, an increase in replies feels a little unavoidable regardless of quality. And similarly, when there are less restrictions on posts, less flags also seems natural. I'm sceptical these actually relate to quality, rather than just: We let people post more stuff, and they did. Commented Dec 17, 2025 at 9:31
  • 39
    "This tells us that experienced developers are hungry for these conversations". Not necessarily. I've seen several very high rep users post opinion questions simply because the UI option is completely unclear about what the options mean. You need to account for that before making such a conclusion. Commented Dec 17, 2025 at 12:42
  • 12
    "To capture this value, we propose a radical shift: stop closing questions and introduce a new curation model" OK, you're officially saying goodbye and good riddance to the Stack Overflow that the world know as a high-quality Q&A repository, then. Please don't use such language or self-descriptions moving forward anymore if you implement this. You will have just created a slightly more restrictive /r/programming, instead. Commented Dec 17, 2025 at 19:29
  • 3
    "Are closures for “opinion-based” reasons going down and are we seeing more “opinion” questions? In short, yes. Though this should not surprise anyone, given that they can’t be closed and there is no real, defined set of rules. Anyone poking around on SEDE can see that question closures are continuing to decline." What's the point of this section? If you already know the answer, why ask the question? Also, why bother talking about it if you aren't going to adjust/weight it based on the total number of questions asked? I guess because it would not tell the story you want it to. Commented Dec 17, 2025 at 19:30
  • 21
    "Yes. Opinion-based questions are getting about three times more replies than regular questions (answers + comments)." Obviously, because you don't let us close them or hide them by downvoting. Of course when you force feed bread down ducks' throats, they get fat. Commented Dec 17, 2025 at 19:32
  • 24
    All this boils down to, once you get past the cutesy story, is "we have another radical idea that we want to implement instead of doing the work of fixing all the past ideas we've implemented". Stop trying to implement new things and fix/improve the things you already have. Staging Ground. Threaded Comments. The Ask Wizard. Tag hierarchies. Answer versioning. Migrating rep-based privileges to being usage-based. Proper useful migration beyond 5 sites. There's so much good you could do here before you even get close to "let's throw away the Q&A model that literally built this company". Commented Dec 17, 2025 at 19:56
  • 17
    I just skimmed and probably have quibbles with some of it, but I really like the idea of promoting quality content into a library instead of trying to filter out content. I feel strongly content should be moved to another location where only trusted curators get to work on it. This helps make it more coherent and discoverable, and could allow content that is more current to be merged into old possibly outdated content organically. Commented Dec 17, 2025 at 20:44
  • 5
    "The vast majority of casual visitors’ and new askers’ questions do not naturally align with our content standards." This is maybe the most important statement, describing the root of the problem. We had the StackOverflow Academy that unfortunately never started, we had the unfriendly comments robot, the new contributor badge, the question wizard, the staging ground. Nothing worked. We might as well give up and write the knowledge library ourselves. This kind of seems to be the gist (at least how I understand it). Commented Dec 18, 2025 at 7:34
  • 20
    More clicks on the shiny thing means the shiny thing is shiny! Throwing out data points without context has become the default self-validation of the dev team. If you give people an unregulated forum they will fill it, this doesn't mean it's useful only that you are getting the requisite ad-clicks your overlords demand of you. It seems fairly likely that the lower flag rate is because all the users that believe in active curation have blocked this experiment as yet another misguided click-bait project. Commented Dec 18, 2025 at 14:15
  • 13
    "in The Workshop, the first and foremost goal would be to help the asker." no thanks, I'm not in it to help an individual, if this becomes reality, I'm out. I'm also not in it to dig through the Workshop mud for diamonds. I'm here because there's a general consensus that all content should be polished, especially the questions. Not to let them rot away in an archive. Commented Dec 19, 2025 at 12:52

20 Answers

53

I commend you for the efforts to bring more questions to Stack Overflow, but let me highlight some problems.

Promoting questions to The Archive

We don't know which questions are worth preserving; we only know which ones are not worth preserving anymore.

All questions that can be answered should be publicly visible so that they get a chance at becoming valuable. Only after some time can we judge whether the question turned out to be a useful contribution or misleading. We can cast downvotes and upvotes from the time it is posted, but even heavily downvoted questions can result in helpful answers. And it is answers that we are optimising for, not questions. If a terrible question received good answers, we can rewrite the question ourselves to match the answers. But a good question will only become useful once it receives a good answer. It may take years before a good answer appears. Or maybe it has taken curators years to observe that a bad question attracted good answers and needs to be improved. Either way, it's only after the question has been publicly exposed for a long time that we can truly assess its usefulness.

Threaded replies are a necessary evil

Stack Overflow isn't the only place where I find answers after searching on Google. From time to time, I get results on discussion forums and I hate those. Why? Because they are designed for solving the immediate problem of the asker, not for presenting quick solutions for future researchers.

Threaded replies are good at getting to the solution, but they are an absolute time-waster when it comes to reading them. And selecting the best/accepted/working/preferred solution isn't making it a lot better. Because the solution took multiple messages to get to, it isn't contained in a single post, but rather the whole conversation thread is relevant. The reader must go through all the posts to fully understand how they arrived at the solution that worked and why.

Stack Overflow excels because it is designed such that the answers are written in a self-contained way and aimed at everyone with the same problem. You only need to read one post to understand what the proposed solution is. The next post contains another alternative self-contained solution. Posts are ordered by how useful everyone thinks they are, not by time or by what the asker thinks. It's quick and easy to find solutions.

We don't allow working on solutions in the Staging Ground because we don't want to lose valuable answers. If curators answered questions in the Staging Ground, a possibly good solution could get buried in a threaded response without getting posted as an actual answer. And what you are proposing is exactly that!

Threaded replies only work for temporary comments when trying to clarify the question and should be deleted once it is ready to receive answers.

  • 26
    I think your second point is excellent and very important. Forum/ reply-style pages are optimized for the asker, for engaging with people. Q&A pages are designed for the reader, for consuming information as efficiently as possible. SO was designed to sacrifice the former to elevate the latter; personally, I'm sure that fusions of the formats can be successful, but it's absolutely critical that we don't compromise the reader experience, because that's most people who come to SO. SO was built to optimize for pearls, as the adage goes... if we start burying them, then there's little left. Commented Dec 16, 2025 at 20:55
  • 2
    Possibly threaded replies could also work as a path to a solution if someone were incentivized when it was all said and done to summarize into a suitable-for-reader conclusion. Commented Dec 17, 2025 at 19:33
47

The premise of this question strikes me as incoherent in several ways.

Before I outline my claims, I'll offer my counter-theory: engagement is a stupid goal. The goal should be that people find what they want (and click on an ad or two in the process). If their question is already answered, there's no need for them to engage. Technology doesn't change that fast. The site has successfully built a body of knowledge that answers a vast number of questions, with no further engagement needed.

On to my specific arguments.

The question claims that the questions we are closing frequently contain the nuclei of good questions. I very much doubt this. Take away the duplicates, the homework, and the completely off-topic questions, and you have already 'dammed the flow to a trickle.'

The question presumes that the experiment has successfully diverted questions that would otherwise be closed, and thus the correlation of reduced closing is in fact causation. Reading the front page, I still see a torrent (to stick to the damp metaphors) of close-worthy sewage from people who are not getting the message. Heck, I continue to see first-time questions that need to be in staging ground just showing up. If closes are down, perhaps it's because more closers have gotten fed up and departed.

Finally: this proposal seems to me to essentially describe what I see happening in staging ground, where a discussion attempts to lead the questioner to state a coherent question, or solves their problem in the comments. Why not invest in that instead of launching on a hyper-complex alternative?

Postscript: To make my views clear: If you-all want to try to make some money by adding a conversational help structure in parallel with Q&A, I'm 100% neutral. As I see it, 99% of the challenge is going to be building the experience -- and the hardest 99% of the 99% is going to be building the expert community to provide the value and moderate the process. The existing process and community does not transfer / translate. The comments on this question are one barometer into the mood of the existing community, and it isn't encouraging.

  • 11
    This! If their question is already answered, there's no need for them to engage. Only replace "engage" with "ask again", because we want them to engage: by voting. Commented Dec 17, 2025 at 13:16
  • 3
    They will never have enough rep to vote (I hope) if all they ever do is find useful content. Commented Dec 17, 2025 at 19:56
  • 3
    Engagement is a stupid goal if you want the site to be useful, yes. But it's the only thing they seem to know how to monetize. (Much like every other Internet business nowadays.) Commented Dec 17, 2025 at 20:41
  • 13
    I don't follow. Money comes from clicks on ads. If a person comes in 20 times to look at 20 answers, and sees 10 ads every time, there's the money with no detectable 'engagement'. Engagement is an ignorant-CFO surrogate for actual revenue. Commented Dec 17, 2025 at 20:56
43

The opinion experiment revealed that users aren't looking for subjective debates; rather, they most often seek detailed explanations.

The community told you this before you went through with the experiment, with a feature set that made it impossible to submit our opinion or moderate the questions that were asked. You ignored our feedback.

To capture this value, we propose a radical shift: stop closing questions and introduce a new curation model—The Workshop for collaboration and The Archive for curation. Sit down, grab a drink, and start reading.

You are going to ignore our feedback. I am positive my feedback will be ignored, but this sounds like a horrible idea.

After running the opinion-based question experiment for just about two months, we know it's still raining upstream, and we have water sitting in a reservoir just behind the dam.

The experiment failed because you implemented a half-baked feature, without listening to the feedback you received, and essentially tried to relaunch the entire Stack Exchange Q&A model overnight. We told you not being able to issue downvotes was a bad idea. We indicated being unable to close the questions was a bad idea. We told you the entire concept of asking a question, seeking strangers' opinions on a matter, was a bad idea. You ignored all the feedback.

In short, yes. Though this should not surprise anyone, given that they can’t be closed and there is no real, defined set of rules. Anyone poking around on SEDE can see that question closures are continuing to decline.

You allowed users to ask questions that cannot be moderated by the community or the elected community moderators, proceeded to not moderate the content yourself, and then pointed out that question closures decreased? Of course they decreased – you created a situation that could only produce the results you were expecting.

As of this writing, there are 1600+ opinion-based questions, 78% of which have at least one reply. In terms of engagement, this is a clear signal of demand. By our count about 85 replies were about telling the asker that their questions should have been asked as traditional Q&A (which is less than 2% of overall replies). This however isn't an easy thing to track, so there may be a few unaccounted for.

So is there an actual demand, or did users not understand what they were doing and end up asking a question seeking an opinion when they actually wanted to ask a question seeking an actual solution? In my personal experience, every question I looked at was improperly submitted seeking my opinion.

By damming these up for years, we have incidentally been blocking meaningful attempts by people trying to gain deeper understandings and have “why” conversations that help them grow. “Opinion-based” often just meant “nuance required.”

It isn't hard to ask a question seeking a deeper understanding of a topic; I have done it countless times, and I have answered hundreds upon thousands of those types of questions. It just takes effort on the part of the individual asking the question.

Expert Endorsements (a working title), a unique vote reserved for some community members, would be a validation of the question’s future value from the established curators on Stack Overflow. It would be the opposite of a close flag. Getting one means you have created content that is the cream of the crop; it means your question can be found easily and that it should be surfaced to future searchers and learners, because it's good.

Quora has something similar. I don't use Quora, because the users with "expert endorsements" charge for their endorsements (i.e., answers). I want no part in a community where this could become a reality. The predicted outcome of "free votes" being issued was seen: users voting for each other, which got them suspended; but the behavior is difficult to track, so lots of users are getting away with it.

We would probably want to move towards a threaded discussion format to support The Workshop.

So you want to turn Stack Exchange communities into Reddit subreddits? I have been a member of forums in the past; they never last. The only community I have ever seen last more than 5 years is Stack Overflow, and that's because it's NOT a place for discussions. If threaded discussions become a reality, I won't be part of the community. I will delete my profile.

I have purposefully been extremely harsh in my answer. I wasn't as harsh in my past feedback to the other failed horrible experiments, and those experiments still went live, with the predicted outcomes the community shared. So I have no doubt this new experiment will be implemented despite the community hating it.

  • 3
    "user-friendliness" is different than general "unfriendliness". the first is about the experience the system gives the user. the second, in the context of the discussion you are talking about, is usually about framing a curator's intentions towards an asker as ill. Commented Dec 17, 2025 at 0:01
  • 8
    @starball user-friendly, welcoming, valuable, useful, engagement, participation, collaboration, experiment, success, activity, demand, growth, learning, improvement, revelation, explanation, deeper understanding, reality, alignment, without penalty, endorsements, popular, ecosystem just obscure Prosus N.V.'s true intention, which can really only be artificially inflating the platform's financial worth in an upcoming sell-off, preferably before the AI bubble bursts. Last addition to the privacy policy even shows that's anticipated. Commented Dec 17, 2025 at 2:41
  • 4
    @user4157124 "which can really only be" that sounds like speculation. Commented Dec 17, 2025 at 9:32
  • 1
    I get that you mention it at the end, but I wish you weren't as harsh here. The post said "This idea comes from rank-and-file staff after confronting the results of the opinion experiment and the reality of our platform activity data" - I don't find statements like "You are going to ignore our feedback. I am positive my feedback will be ignored, but this sounds like a horrible idea." to be constructive. All the CMs surely know already that many Meta folks feel ignored. How does the harsh tone accomplish a more desirable outcome? Commented Dec 18, 2025 at 18:16
  • 2
    @cocomac what would be a better way to express that sentiment that would be constructive? Commented Dec 18, 2025 at 18:37
  • 2
    @cocomac - I was less harsh with my last feedback, and all I got was a pat on the back, which doesn’t solve any problems I face while participating in the community. Decided it certainly couldn’t hurt to be less harsh? If you disagree with what I said, you can certainly, just issue a downvote. Commented Dec 18, 2025 at 19:31
37

Getting rid of traditional Q&A is going to alienate most of the community, and this new system takes much more buy-in from the very users you're alienating.

Everyone currently responsible for the curation of this site, from moderators to reviewers to regular users, has been doing so under the traditional Q&A format for decades now. Users ask questions, others answer them, and everyone else on the site contributes by voting, flagging, and editing.

Replacing the Q&A format that StackOverflow was built on with messier discussions that will need way more time and effort to curate, moderate, and then eventually elevate will drive away the curators that have been running your site all these years.

It's already clear that the company doesn't care what we think when running these side experiments, but please, please stop trying to ruin the traditional Q&A format. If you kill it, the site dies. Full stop.

  • 5
    with the exception of the proposal that Q&A must first go through the Workspace, I don't see how what's proposed changes "traditional" Q&A or replaces it (much less "ruins"). if things could get added to the "Archive" (I don't like the term- I'd have chosen "library" or "knowledgebase") without first going through the "Workspace" (which I suggest enabling), then as I understand it, the "Archive" would just be the current placeholder name for what exists today as "traditional" Q&A. Commented Dec 16, 2025 at 23:57
  • 2
    @starball Agreed about "Archive", sounds like some dingy space of dust and cobwebs that PhD students need to crawl through to mine for thesis content rather than Empowering Technologists and Their AI Sycophants For the Future To Infinity And Beyond or whatever it is we're supposed to be after these days. Commented Dec 17, 2025 at 19:34
  • 1
    coming back to my previous comment, somehow I missed the glaring point in the TL;DR that proposes removal of closure as a mechanism... in my defense, other than the TL;DR, I read the whole post and didn't pick up that point clearly, so I don't think it's totally on me. Commented Dec 17, 2025 at 22:45
  • @starball - So it's even worse than I thought it was, wonderful! Commented Dec 17, 2025 at 23:22
  • @SecurityHound well, it is very much a proposal, and intended to solicit discussion about the proposal or even problem statement / constructive critique / other ideas about addressing the problem statement or reframes of the problem statement. Commented Dec 17, 2025 at 23:26
  • 6
    @starball - The last discussion with regards to every recent experiment has been basically ignored, and every prediction made by the community, came true to some extent Commented Dec 17, 2025 at 23:46
31

I've picked and chosen some points I wanted to address here. The short of it, however, is that there's a lot I disagree with here, much that I have concerns over, and little I agree with. Some of those thoughts of mine are below:


The opinion experiment revealed that users aren't looking for subjective debates; rather, they most often seek detailed explanations. To capture this value, we propose a radical shift: stop closing questions and introduce a new curation model

Yes, it has been known for a long time that users value detailed explanations. I've mentioned many times, including in a call with yourself, that the answers I find the most helpful are the ones where the question is well asked, so I can confirm it's the same issue I am encountering, and where the answer(s) explain their solutions, so I can learn from that solution and adapt it to my own needs. Not closing questions doesn't help this; questions are closed so that they can be improved before they receive answers. Answers to unclear or vague questions result in guessed or vague answers; those are not going to contain detailed explanations because they inherently can't or the answer is going to be too wide in scope.

After running the opinion-based question experiment for just about two months, we know it's still raining upstream, and we have water sitting in a reservoir just behind the dam.

You're not wrong here, thank you for confirming this, but actions speak louder than words.

Are closures for “opinion-based” reasons going down and are we seeing more “opinion” questions?

In short, yes. Though this should not surprise anyone, given that they can’t be closed and there is no real, defined set of rules. Anyone poking around on SEDE can see that question closures are continuing to decline.

I'll be blunt, but I'm not sure what you're trying to state here. Are you saying that the number of "traditional" questions being closed as opinion-based is going down, or that the Opinion-Based questions aren't getting closed? The latter cannot be closed, and the former should (hopefully) be posted as an opinion-based question, so of course the number is going down; you've forced it. This is, with respect, somewhat of a nonsense statistic; it tells you/us nothing.

Yes. Opinion-based questions are getting about three times more replies than regular questions (answers + comments). As of this writing, there are 1600+ opinion-based questions, 78% of which have at least one reply. ... By our count about 85 replies were about telling the asker that their questions should have been asked as traditional Q&A (which is less than 2% of overall replies). This however isn't an easy thing to track, so there may be a few unaccounted for.

As a mod, many of the responses I've seen have been to tell users they've posted things in the wrong place. With no curation on these types of questions, it's impossible to know whether a question received responses because it was good or because people were trying to tell the user that they should have posted elsewhere.

Not to mention that out of the 4 types (we're still missing "How To?"), 3 go to the opinion-based questions, so it's a little unsurprising to see more volume when 75% of paths take you to the new question type. It's not a level playing field. Without curation, or comments, I can't take the above as anything but somewhat meaningless KPIs, as it wasn't a fair comparison.

Ironically, we found that the “Troubleshooting” type we introduced as an option during the experiment was almost entirely categorizable under our new implementation category: users trying to get bugs fixed.

Admittedly, I'm not sure what else you would expect to see from a troubleshooting problem; the category strongly suggests that fixing bugs is what it is aimed at. As mentioned many times, the missing "How To" category would cover the majority of other "implementation" questions, but I've frequently seen those end up as "Advice" and "Best Practice" questions, which they simply aren't.

The Workshop would be the single entry point for all questions and utilize a threaded reply structure with improved notifications rather than the strict Q&A format. This would allow for messier collaboration.

So how do questions get "out" of the Workshop? If this is just the Staging Ground, why not just use the Staging Ground? How does it differ? Threaded responses for answers are AWFUL. I hate when I end up on Reddit, or similar sites, and have to pick through 15 replies between an OP and an answerer to build a complete answer from those comments. It's time consuming and I rarely get the result I want. Answers, as they exist now, give me a complete answer in one space; no back and forth with the user with additional statements like "But what if it's a Saturday and the wind is blowing in the other direction, while my cat is locked outside in the snow?"

Expert Endorsements (a working title), a unique vote reserved for some community members, would be a validation of the question’s future value from the established curators on Stack Overflow. It would be the opposite of a close flag.

Not to seem silly, but we already have an opposite to a close vote; the reopen vote.

Getting one means you have created content that is the cream of the crop; it means your question can be found easily and that it should be surfaced to future searchers and learners, because it's good.

That already exists in terms of votes; content with a higher amount of votes has been evaluated by the community (not a single "expert") as good. Why not trust all those people, rather than 1 person?

I can also see avenues for abuse of such a feature. Mods would need tooling to identify that. How it could be abused is likely a better discussion for another time.

how do we visually protect The Archive when content is promoted from The Workshop? If a discussion page in The Workshop looks like a standard forum thread, we risk losing the prestige and discoverability of the best answer. We cannot let the harvest be buried by the chaff.

Simply put, the question should never have been answered in "The Workshop". If it was, it was answered in the wrong place, just like if someone answered a question in the staging ground in the comments; those comments are "lost" when it's migrated. Answering questions in a threaded discussion environment is not what people using Stack Overflow to find answers want to see, because it doesn't enable them to find the answer easily; it ends up spread out across many comments and loses a significant amount of value. With the threaded style, there's no drive for the author of the many "answers" to consolidate their first "answer" into a complete answer. What you have aren't answers any more; they're long-form comments.

  1. How does a discussion in The Workshop become an artifact in The Archive? Is this a physical migration (moving the Question and its best Answer to a new page) or a visual transformation (signaling high quality within the existing thread)? What are the mechanics of promoting a conversational chain in The Workshop into a standalone, Endorsed Artifact in The Archive? For example, if a “messy” Workshop question receives a brilliant answer, does the question need to be pruned before it enters The Archive, or can the answer stand on its own?

As the answers shouldn't be in the Workshop, there's no need to migrate anything. The "Archive" (which isn't a good name) is where the answers are, not the discussion area. And what defines the "best" answer – votes? That can easily be gamed; FGITW, anyone? The answer with the most votes is not always the "best", especially if one is older than the other and things have moved on. You're doing more harm by leaving those answers (that, again, should never have been in the "staging" environment) behind.

Who should have the power to ‘Endorse’ a question or answer? Should it be reserved for Gold Badge holders, elected moderators, or a new tier of subject matter experts?

The community; that's what we already do. As both a gold badge holder and an elected moderator, I don't want to be a sole arbiter of what is "good" for a post. It ends up with clear conflicts of interest. For example, why would I say someone else's answer is better than mine? I, personally, would almost certainly be happy to; however, there are plenty of users out there that don't use their gold badge for "good", or at all, and will repeatedly answer the same duplicate over and over; those people could easily abuse their badge to promote their answer(s) over everyone else's.

As mods, as we have stated to the community many times, we are not SMEs in all domains. Relying on a mod who is an SME in a subject to mark an answer as "good" is also not a task I want as a mod; it's not what I wanted to achieve when I put my nomination in.

On a discussion page in The Workshop, when a curator gives an Expert Endorsement, how should it look? If the rest of the page utilizes a threaded discussion format, does the Endorsed Answer get "pinned" to the top? Does it get a gold frame? Should the classic answer styling be reserved exclusively for Endorsed Content to signal its status as an artifact on The Archive? And crucially, how do we allow users to toggle between the “messy” collaborative view and the clean “curated” view?

Answers shouldn't be part of the discussion; they should be an answer. Leave the comments as threaded, and the answers as answers. If you want an answer endorsed, we have a similar feature already: bounties. Why does it need to look any different?

  • 10
    FWIW, we have tried “expert endorsed answers” before, with collectives, though I won’t comment on how useful that was Commented Dec 17, 2025 at 14:32
  • 3
    Thanks @user400654, that's something I had entirely forgotten existed. I've not engaged with any of the collectives, so it's not something that ever came into my line of sight. I'll leave it to others to comment on its (lack of) success, in regards to endorsed answers, in their own answers here. Commented Dec 17, 2025 at 15:10
30

I thought about this some more, and I came to the conclusion that this would be a huge nail in the coffin for Stack Overflow. The whole proposal actually sounds very offensive to the community that built Stack Overflow.

You are basing the proposal on faulty premises. The experiment showed that there is more "water" because it was designed to brainwash us into thinking this. You tricked users into using the new feature even for questions that weren't opinion-based (which, to me, felt like it was most of them), you didn't provide required moderation tools, and you didn't allow the questions to be downvoted or closed. The whole experiment was designed to get you the results you wanted.

You treat question closure as something negative, which means the end of the question. And even if it is, it isn't meant to be that way. Closed questions are waiting for an edit from the asker! We want them to be reopened. If not enough questions get reopened, then do something to address this problem!

The fact that 50% of the newly posted questions get closed isn't our fault (at least I hope not). It's your fault! You're not trying to help new users succeed on the site. You're not providing them help in asking questions, in understanding the site, and in searching for existing duplicates. You made some good improvements, such as the staging ground, but you refuse to utilise it more. You tried to improve the asking wizard, but you ignored our feedback. No wonder so many new questions are unsuitable.

Stack Overflow is dying because you are killing it!


Here are some ideas you could try instead:

  1. Push more questions into Staging Ground. Make it more discoverable, too. Maybe show an indication when questions in followed tags are pending in SG.
  2. Implement some helpful QOL improvements to the question asking process. I don't mind if you use LLMs to scan the question and identify potential issues with it before it gets posted. You can tell the user when they need to remove noise, fix grammar, write a better title, etc.
  3. Reduce the question ban time. We don't need to be so harsh anymore. We can reduce it to a week or a month ban. Questions from such users should definitely land in Staging Ground.
  4. Make reopening more frictionless. Perhaps notify close voters when the question they closed gets reopen votes. Highlight questions with reopen votes in the question list. Bump the question when it receives a reopen vote.
  5. For moderators, implement tools that would enable us to investigate closures en masse. We have no easy way to find which questions were closed for which reason. It's really difficult to investigate invalid close votes.
  • 7
    Staging Ground questions definitely need to be shown all the time for all users in the question list, and possibly even while the question is being edited, and not posted until it exits the Staging Ground. That would bring more eyes to such questions and give more helpful suggestions from the actual experts in the subject topic. And all questions from new users and those who are question banned need to land in Staging Ground. I am not worried about lack of reviewers because if people want to answer questions they would have to do reviews. Commented Dec 17, 2025 at 20:11
  • 8
    I'm wondering why all new questions by new/inexperienced users don't go through SG. SG already exists and is effectively "the workshop"... no need to re-invent the wheel, we already got one! Commented Dec 17, 2025 at 23:06
  • I am fine with switching the question rate limit to new questions going to SG, but the moderation tooling within SG needs to improve; it should be possible to downvote bad questions in the SG Commented Dec 17, 2025 at 23:33
  • 2
    @DrewReese note that SG currently has a non-technical "constraint" that it's not intended to be a place for helping answer the question / solve the problem, but rather a place to workshop the question itself. that could be changed though, and no immediate reason comes to mind for me why not, other than people might feel less incentive to do "actual" Q&A, but there may be some pros to that con (I see a lot of questions that I just don't think have long-term value, where their views over the years seem to back that up). Commented Dec 18, 2025 at 0:23
  • @starball "it's not intended to be a place for helping answer the question / solve the problem, but rather a place to workshop the question itself." - OFC it's not, that's kind of the point of the SG, to whip a post into publishable state so it can be answered. Though I see what you mean now. I have myself occasionally provided trivial answers/solutions in SG posts that I know full well will fail miserably if published. Commented Dec 18, 2025 at 0:54
  • 2
    @DrewReese/starball this is literally why instead of SG, i want SG's curation tools on Q&A. SG's tools are perfect for workshoping questions into shape, but it gets in the way of the user getting the help they need. I don't understand why we can't allow both processes to happen simultaneously, most of the "issues" i can think of, such as "don't invalidate existing answers" are policies that don't actually apply to this scenario. If the question isn't "complete" yet and someone answers wrongly due to missing information they took a guess at, they just answered wrongly. Commented Dec 18, 2025 at 7:49
  • No. 2: use LLMs to scan the question and identify potential issues with it - I have a strong suspicion that a lot of suggested edits do exactly this. Do it in the dialog and we get rid of the overhead Commented Dec 18, 2025 at 8:33
  • @DalijaPrasnikar "Staging Ground questions definitely need to be shown all the time for all users" - so you're proposing to remove the opt-out setting that I deliberately enabled to not see that crap? If that were to happen, I might just stop visiting the site entirely. Commented Dec 18, 2025 at 9:17
  • 2
    @l4mpi What is the difference in seeing SG questions compared to regular questions? Without SG those questions would end up directly on the site. SG was meant to help new users learn how to ask good questions; we continuously asked for better onboarding and now that we have it, it should be used. You are free to ignore and not open such questions, but it makes zero sense not to show them to all users. This is not some experimental stuff that was forced upon us. This is what we asked for. Commented Dec 18, 2025 at 9:55
  • @DalijaPrasnikar I'm not interested in personally "onboarding" users by conversing with them in SG until their question is OK; and as other curation options (dv/cv) are not available I don't want to see SG posts. I want to deal with users who at least have basic competency in the topic they are asking about (aka minimal required understanding - you might remember that CR). This includes not wanting to deal with people who fail to understand the SO QA model. And I want to note I personally never asked for "better onboarding", I asked for higher quality standards for more than a decade. Commented Dec 18, 2025 at 10:21
  • @l4mpi You cannot get higher quality standards without teaching users what is expected of them. Not all SG questions are crap. Those that are not can be directly published on main site and answered. If some question is crap, you can skip it. You don't have to interact with it in any way if you don't want to. You can also mark questions as off-topic (asked on wrong site) or duplicate which are equivalent to casting close votes and for those you don't have to leave any other comment nor help user in any other way. Commented Dec 18, 2025 at 11:46
  • @DalijaPrasnikar disagree. Nobody needed to teach me what was expected of me; I learned this by observation and then jumped straight into answering questions. My first question was pretty well received even though it was about a topic I had little experience with. The horse bolted a decade ago, but we could have very well gotten higher quality standards by being more elitist instead of more welcoming (and yes, that might have led to less "engagement" and content over the last decade and might not have resulted in a $1.8b valuation, but I care about quality first and foremost). Commented Dec 18, 2025 at 12:40
  • @DalijaPrasnikar and re "not everything in SG is crap, you can skip it" - see this comment from yesterday. I am not interested in wading through crap to find useful things, so if SE allows me to filter out some of the content that has a very high probability of being crap and also only allows very restricted curation interactions, then of course I am going to enable that filter. Commented Dec 18, 2025 at 12:43
  • 3
    @l4mpi Again, those questions are not crap. You are inventing a problem that does not exist, as such questions would otherwise land directly on the main site. If you are not interested in interaction, nobody forces you to open such questions. But they should be visible because that is the only way to push more questions into SG and actually reduce the amount of crap questions which don't land in SG. Since your last answer was posted in 2023, it is not like seeing all those questions would somehow prevent you from finding good questions you can answer. Commented Dec 18, 2025 at 13:38
  • 1
    @DalijaPrasnikar "Again, those questions are not crap" - that may be YOUR opinion and experience, but my experience is very different. I did have SG enabled for a few days and then turned it off because almost everything I saw was not something I wanted to see. And yes, I stopped answering except for very rare cases as a) it is rare to find a good answerable question in my fav tags and b) I heavily disagree with the course SE has taken over the last decade and thus reduced my contributions. I mainly curate a bit during downtime at work to slow the rate of SO becoming a trash heap. Commented Dec 18, 2025 at 15:26
29

The Workshop would be the single entry point for all questions and utilize a threaded reply structure with improved notifications rather than the strict Q&A format. This would allow for messier collaboration. Incomplete or unclear questions could be discussed back and forth without penalty. We want to keep the conversation open while the water soaks in. The purpose is to see what value these posts might hold for the future.

I seriously don't see any value in this. If you want useful artifacts, it needs to be clear what is a question and what are answers. Everything else is comments.

With such threaded responses and going back and forth in separate replies, you will not be able to get any useful long-term artifact. Nobody will go and extract that into something useful; every participant who contributed something meaningful would have to do that themselves.

If you want people to be able to ask poor questions and have them answered, the format is not the problem. The closing process is what prevents people from writing answers and sometimes such questions are resolved through comments. Not much different from the current process. Starting with a mess will never yield good results. On top of that you will seriously drive away the remaining experts who are still hanging around and answering here.

If you are worried about the number of questions being asked, then maybe you can give users with a question ban more opportunity to ask a new question; maybe you can allow them to ask one question a month instead of every six months. If that goes well, you can maybe even relax that further. But whatever you do, don't change the Q&A format.

Also the quality of responses will go down, because it is a different thing when you are writing a proper answer and different when you are just chatting with someone.

If there was anything to learn from opinion-based questions, it was that the format was wrong. Replies don't work. It looks like you have some activity, but most of it was just comments which would commonly be deleted as no longer needed on Q&A.

Please, don't do this. You will just waste some more time on such endeavors and you will get nowhere. If you want to broaden the scope of questions and allow some recommendations and opinion-based topics, then please keep those in the same format as regular Q&A. If it works for sites like Software Recommendations, it will work here, too. And you will eventually be able to sort out existing useful Q&A pairs which were not on-topic in regular Q&A.

4
  • 14
    And for Question-banned Users, force their "new" Question to go through the 'Staging Ground', don't give them the choice... Commented Dec 16, 2025 at 19:26
  • 1
    And if you want a system that lets you ask poor questions and discuss with no long term artifact? Discord does that well enough. Commented Dec 17, 2025 at 2:06
  • 5
    @user1937198 Stack Overflow is not a Discord. The whole premise here is that we want questions and answers which can help many people over time, not just one person. Commented Dec 17, 2025 at 7:17
  • 5
    @DalijaPrasnikar I totally agree. I was more raising it as: if the company tries, it will probably be worse than the alternatives, and still won't get 'engagement'. Commented Dec 17, 2025 at 9:34
28

Incomplete or unclear questions could be discussed back and forth without penalty.

As somebody who primarily focuses on answering questions, this sounds unfun to me.

The original differentiator SO had was that its users tend to ask fairly high-quality questions. This in turn made answering questions interesting: you don't have to burn time hand-holding users who need help articulating their problem or wade through lower-effort and uninteresting-to-answer questions.

I've seen this dynamic play out first-hand in other online forums/subreddits -- there's no element of curation, so answerers tend to eventually burn out and disengage due to the repetition.

So while I do think there's merit to making the question-asking experience smoother via staging ground/workshop/etc, I also think it'll be important to maintain a strong element of curation or filtration to help answerers have a positive experience.

The revelation came when we analyzed the questions users submitted using the “Opinion-based” option provided during the experiment. These were not just random solicitations for comments or opinions, which was a concern for both the company and the community, but a majority appeared to be questions asking for explanations.

In contrast, I do like the idea of making a push to encourage "explanation/why" type questions.

Part of the reason why I stopped participating on StackOverflow a few years ago was because as I became more experienced, I eventually got bored answering the more narrow "how do I do X" or "implementation" type questions that are common on SO. Having questions that require broader/more holistic expertise or require navigating non-trivial tradeoffs does sound much more interesting to me.

That said, I'm not sure you need this whole workshop/archive concept to support such questions -- frankly, it feels like overkill. (Or perhaps more charitably, orthogonal).

Instead, it seems to me you could accomplish a similar result more simply and quickly by:

  1. Tweaking the Stack Overflow tour, "How to ask", and other such pages to make it clear that "explanation" type questions are allowed and encouraged.
  2. Tweaking staging ground to provide templates for such questions.
  3. Setting up new tags and such to make it easier for interested answerers to find such questions.
  4. Working with meta to develop best practices and guidelines on how to answer and moderate such questions. For example, I imagine it would be a good idea to still expect the question-asker to do some initial research, perhaps to try defining tradeoffs/the design-space, etc... And conversely, the moderation policies should be updated to distinguish "explanation" questions from subjective/opinion-based ones.

As noted, these questions are not really subjective. So StackOverflow's pre-existing Q&A framework should still do a decent job of accommodating these questions. The most comprehensive/thoughtful/accurate answers will be voted to the top; users will accept the explanation that clicked best with them.

6
  • 3
    I assume curation will largely be the same in this "archive" space, and that if one doesn't want to interact with or see the "workspace" stuff, one doesn't need to. and yeah, I don't think the opinion-based experiment necessarily motivates this proposal, but I think there are ideas here meriting discussion / exploration. Commented Dec 17, 2025 at 5:38
  • 13
    Your desire to not "burn time hand-holding users who need help articulating their problem or wade through lower-effort and uninteresting-to-answer questions" is horribly unwelcoming, shame on you! Think of all the lost engagement and of the 1.8 billion dollars some crappy corporation that couldn't care less about you spent on this platform! (sarcasm, obviously, but it really feels like SE does not understand or does not want to understand that most experts have no desire to look at crap questions written by people who struggle to understand the absolute basics of programming...) Commented Dec 17, 2025 at 9:50
  • 1
    'I assume curation will largely be the same in this "archive" space, and that if one doesn't want to interact with or see the "workspace" stuff, one doesn't need to' -- Maybe, but both the initial ideas for mechanics and the terms themselves ("archive", "harvest") do strongly imply finality. Taking a step back, it seems 'workshop' is for answerers who enjoy teaching others to write/think/problem-solve with clarity and 'archive' is for those who enjoy curation. But what about people who most enjoy sharing knowledge and expertise? Atm the proposal doesn't really describe a flow for those folks. Commented Dec 17, 2025 at 14:34
  • 3
    'I think there are ideas here meriting discussion / exploration.' -- IMO the most interesting idea in the post is that there's demand for explanation-type questions. so I'd like to see SO explore this particular direction in more depth. That is, make SO a bigger tent by onboarding and explicitly supporting and encouraging a new category of questions. (And longer-term, keep an eye on staging-ground/workshop for any emerging new categories of questions that'd be worth creating dedicated rails for.) Commented Dec 17, 2025 at 15:26
  • 2
    @Michael0x2a: Explanation questions are already on-topic, with minimal fights about keeping them open. e.g. How does $ work in NASM, exactly? / What happens if you use the 32-bit int 0x80 Linux ABI in 64-bit code? / Why does the x86-64 / AMD64 System V ABI mandate a 16 byte stack alignment? / Why in x86-64 the virtual address are 4 bits shorter than physical (48 bits vs. 52 long)?. But yes, some SO staff seem not to have realized that. Commented Dec 20, 2025 at 11:13
  • 1
    But yeah, those questions have much more value than overly-specific debugging questions, and doing more to encourage good questions along those lines would be a good thing. I have seen a couple of close votes on some such questions which aren't directly about writing programs (but that's mostly because they were about CPU architecture more than programming, the internals of how CPUs work. Not because they were about conceptual stuff about language design, although we do now have a site for that.) Commented Dec 20, 2025 at 11:15
26

If you legalise drugs, fewer people will be reported for drug use, fewer arrests will be made, reported drug use will go up, and demand will seem to increase.

You relaxed the rules and got increased engagement.

If engagement is your primary metric... Well, good job.
I can think of a few more changes to increase (short-term) engagement.

You're talking about the volume of questions we close as if it's a problem. To me, it's a result of a focus on quality. If you loosen that focus, of course you're going to close fewer questions.

One line that really drew my attention:

The median reputation for users asking these opinion questions was 10x higher than those asking troubleshooting questions (113 vs 11), and their accounts were significantly older (median age of ~ 7 years vs ~ 2.5 years).

113 rep is not significant. Especially not on accounts that have had about 7 years to collect as much. That means that the account received less than two upvotes per year.
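
A rough back-of-the-envelope check bears this out (assuming the standard +10 rep per upvote and ignoring accepted answers, bounties, and downvotes, so this is only an approximation):

    # Back-of-the-envelope check; assumes +10 rep per upvote and ignores
    # accepts, bounties, edits, and downvotes, so treat it as a rough bound.
    median_rep = 113          # median rep of "opinion" askers, from the post
    account_age_years = 7     # median account age, from the post
    rep_per_upvote = 10

    upvotes = (median_rep - 1) / rep_per_upvote   # every account starts at 1 rep
    print(upvotes / account_age_years)            # ~1.6 upvotes per year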

To call these users "experienced developers (who) are hungry for these conversations" seems disingenuous. Experience doesn't come with years; it comes with action.

To me, these users don't even qualify as active members of the site.

17

Ironically, we found that the “Troubleshooting” type we introduced as an option during the experiment was almost entirely categorizable under our new implementation category: users trying to get bugs fixed.

I'd separate "my code that is supposed to do X is broken, how do I fix it?" from "I want to do X- how do I do it?". The latter form is usually more useful to future readers.


In the current curation system, curators close questions as opinion-based because they don’t fit our strict “one right answer” format. In reality, a lot of these are questions of conceptual understanding, based in facts, reasoned through collaboratively, and grounded in truth. By damming these up for years, we have incidentally been blocking meaningful attempts by people trying to gain deeper understandings and have “why” conversations that help them grow. “Opinion-based” often just meant “nuance required.”

There's a definition of "constructive subjective" questions that is blessed for regular Q&A:

Constructive subjective questions:

  • inspire answers that explain “why” and “how”
  • tend to have long, not short, answers
  • have a constructive, fair, and impartial tone
  • invite sharing experiences over opinions
  • insist that opinion be backed up with facts and references
  • are more than just mindless social fun

I've raised this a couple times and don't feel like this is getting acknowledged in the announcements about this experiment. That makes me feel confused and frustrated.


The proposal: The Workshop & The Archive

At a high level, I'm in favour of this (assuming it's not implying that closure doesn't exist in the "Archive" space).

I've been thinking about the idea of intentionally putting a "library/knowledgebase" space and a "helpdesk" space side by side. I know that "helpdesk" is a bit of a trigger-word to us, but I understand that to be because of the frustration and challenges that come from trying to achieve the former when people approach it only as the latter.

I think having two spaces side by side- one geared towards interactive help (like a helpdesk), and one geared towards being a library where information can be searched and referenced (long-term, reusable value)- can be an arrangement where the spaces complement each other: a helpdesk can give a concrete picture of what issues people are having today (which could motivate adding new items to the library, for future reuse), and a library/knowledgebase (ideally) provides material that can be referenced in a helpdesk, so helpers there don't have to repeat themselves at length about particular building blocks of knowledge.

I also think there's value in allowing people to choose the experience that they want. If someone wants to seek help and doesn't want to do the work of creating a post that meets Archive standards, they can choose an experience where that won't be expected of them. If someone wants to help those people and doesn't necessarily care to contribute to an Archive (and there are people like this), they can choose that experience. If someone just wants to contribute to the Archive, they can choose that experience. If someone wants to contribute to both, and to bridge between them, they can choose that experience.

I'd expect this to result in less tension between these different groups.

Also, speaking for myself personally, it would be a huge relief to me to be able to choose (on a case-by-case basis) an experience where I get review of a question post I draft from people with more expertise than me. There are so many domains I don't have expertise in, and there are often questions I want to ask, but have no idea what details to include, or how even to pose the question well, because I have little to no idea how things in that domain work (e.g. Linux / Ubuntu things, especially when it comes to layers which separate concerns). In those cases, I really want to have a space that is built for me to get help writing up a good question post. Maybe that can just be meta, but it would seem awkward to me to see that as the happy path for this. I wish Staging Ground existed on other sites, and I'm glad to see that you're still thinking along these lines, or leaning into those ideas.

I'm not a fan of the word "archive" though. It carries a connotation of a place where things aren't going to be referenced often... when being referenced often is the whole point of a library or knowledgebase. I'd call it a library or knowledgebase.

I have some specific thoughts on mechanics for a helpdesk space, such as how to rate limit things, no web indexing (it's the library's goal to be searchable), and automatic item deletion after a year or so.


The Workshop would be the single entry point for all questions

There aren't privilege barriers today to posting Q&A, and I don't see a reason to add them. If one wants to get review of a new Archive entry they'd like to propose, then sure, they can use the Workshop for that, but they shouldn't be forced to.

If one person thinks an Archive Q&A isn't useful, they can just downvote. If it's out of site scope, they can just close-vote.


The Workshop would be intended to catch almost every appropriately on-topic, answerable question on Stack Overflow.

So... how does the "opinion-based content" experiment play into the definition of "on-topic and answerable"? What are the standards for subjective Q&A in the Workshop and the Archive? I'd be pleased to hear if/that the guidelines for "constructive subjective Q&A" will stay in place for the Archive.


Expert Endorsements (a working title), a unique vote reserved for some community members, would be a validation of the question’s future value from the established curators on Stack Overflow. It would be the opposite of a close flag. Getting one means you have created content that is the cream of the crop; it means your question can be found easily and that it should be surfaced to future searchers and learners, because it's good.

Instead of voting to close, you’d vote to elevate. The system wouldn’t be built to hide or remove; it would be optimized to highlight what's worth looking at. Curators would not need to classify and categorize every single incoming question, only pull out the most useful posts and cultivate them for the harvest. [...]

  1. The curator: Who should have the power to ‘Endorse’ a question or answer? Should it be reserved for Gold Badge holders, elected moderators, or a new tier of subject matter experts?

I feel like this is overcomplicating the process / adding an unnecessary barrier. If someone (anyone) sees a pearl in the sand, and they think they can bring it out, I think they should just do it, and that they shouldn't have to have a special role granted to them to be able to. They can (/should just be able to) just write up Q&A for the Archive and post it (and then ideally add a link to the new Archive item(s) in the motivating Workshop item's comments).


This presents a critical challenge: how do we visually protect The Archive when content is promoted from The Workshop? If a discussion page in The Workshop looks like a standard forum thread, we risk losing the prestige and discoverability of the best answer.

  1. The mechanics: How does a discussion in The Workshop become an artifact in The Archive?

As I proposed above, just let the act of creating Q&A in the Archive work as it does with posting Q&A today. Writing good Q&A is work, and I say let that work be rewarded to whoever does it. If someone is really concerned that some solution they provide in the Workshop will be written up by someone else in an answer post to a new Archive Q&A (and that someone else will get the rep/credit for it), then... they can just be the person who writes up that new Archive Q&A, and use it to help in the Workshop, instead of just participating in the Workshop.

3
  • 3
    isn't this just Discussions 1.0, except we eliminate Q&A as a starting point? Commented Dec 17, 2025 at 1:23
  • 1
    @user400654 from what I remember, Discussions had a goal to be long-term, searchable content, with wider scope, goals, and fewer rules / quality expectations compared to Q&A. Commented Dec 17, 2025 at 1:29
  • 5
    I actually really like your idea of two complementary spaces for the two main 'goals' of people asking questions on SO. I personally have fallen into both sides as an answerer. Sometimes I want to, and have the time to create a lasting, good answer to a question which I found interesting. But sometimes I just have a few minutes and want that bit of warm fuzzies from helping someone get unstuck. Being able to actively make the decision of what level of effort is required by me sounds like a nice idea. I also really like the idea of the 'helpdesk' questions not being indexed. Commented Dec 17, 2025 at 16:24
17

This tells us that experienced developers are hungry for these conversations.

No

Or at least not necessarily. My interpretation is that there's simply a correlation between:

  • People who carefully select a fitting category in the "ask question" dialog
  • People who earn more reputation on Stack Overflow

Or vice versa: if you're too lazy to select anything but the default in the UI, you're less likely to have high reputation.

In general, I like that you want to ease the conflict between curators and askers and that you're bringing in new ideas.

However, my personal advice would be to start slowly. Make small steps. Implement a few medium-sized features that are met with enthusiasm by askers and curators. Given your recent track record, this current vision is too heavy; it may easily break your back.

I'd love it if we made duplicates fun, or at least enjoyable. People love to quote themselves, so here's what I said a year ago:

Finding a duplicate for someone -> should feel like getting your answer accepted. You found the correct solution.
Having your question flagged/closed as duplicate -> should feel like getting a perfect instant answer. It most often is.

14

How about making another website to try such revolutionary water ideas?

what it means to run out of water.

On the other side, in Russian, the phrase "too much water" (about a book) is understood as "too much fluff/filler" (too wordy). This is exactly what we do not want Stack Overflow to be: instead of a concise Q&A with only information to the point, you are proposing to allow all kinds of stuff to stay.

book with too much water

The statistic:

Opinion-based questions are getting about three times more replies than regular questions

tells us that if you mark a certain topic as a place without rules, then users are more keen to post "something" there (not necessarily good content).

I personally don't visit advice topics; who am I to give advice? But I'll gladly answer a specific question if I know the answer and there is none yet given. I am doing it not just because I can, or for reputation (after a certain point reputation doesn't matter anymore), but because I think about future visitors. A few times I got quick help myself from my own posts!

I was amazed how good Stack Overflow is (was?) at giving a concise answer to a specific problem. You know the website which has lots of answers, but which I never visit directly? It's Reddit. Don't become Reddit, please; there is already one.

With the rise of AI, users are going to visit websites (all websites) less often. The "googling" era is obviously in decline. Whether the recent attempts to "just get some users" and "show more ads" will make a significant difference to income is an open question. But the new content will very clearly suffer. Your choice.

13

I really like that this proposal comes earlier in the development process compared to other such changes lately, and it seems to explain more than the average initiative. On the other hand, I really don't like that the solution seems to put so much extra burden on curators, demolish the Q&A model in favour of Reddit-style threaded replies, and in general doesn't place much emphasis on content quality.

It seems to me to be something like: hey, let's be like Reddit, and additionally we can also be like Wikipedia, in a two-level system.

But I believe that:

  • Reddit-like threaded replies are not an efficient way of gathering knowledge
  • Distillation of the knowledge from such conversations is a non-trivial, extremely time-consuming act
  • While it may be optimal for the asker in the sense of a no-friction experience (no downvotes, no close votes, my half-baked question gets answered, improved, and elevated for free), it's a far worse experience for the curators and answerers. It would quickly overload the experts.

This would only make sense if the shortage is really only on the side of question askers and not also on the side of answerers or curators.

Why can't the elevation vote simply be traditional upvotes? Why can't the Workshop simply be the Staging Ground for everyone (possibly extended)?

You basically ask us to do everything twice: like please give everyone a quick answer to whatever they write, and then polish it and answer it again, but even better. Be a relentless live debugger/help desk and a book writer. But who actually has time for that? I may be wrong, but I think the one cardinal problem is simply that we don't have the amount of free labor needed to make this paradise a reality. (AI has, we haven't. That's simply how it is.)

I like Shog9's wisdom from the past, but I don't like his analogies. Analogies always leave important parts of reality out and can easily be changed into anything else. Just replace water with garbage and the picture might become more clear.

I personally would have done this instead: I would have decoupled questions from answers and focused on finding and linking existing content to new questions (maybe without much voting). My premise would be that most new questions are duplicates, some aren't and curation/expert time is limited while being friendly to everyone doesn't cost a thing. And I would have also concentrated on improving existing content. But this path you explicitly do not seem to follow.

It seems that the answer to AI chatbots is that you can write anything and human experts will give you tons of immediate help. I'm really not sure we can beat AI at its own game, especially not without high quality content.

We will surely learn from this big experiment whether Reddit-like threaded replies are useful for knowledge generation. Unfortunately, that is your weakest point. You don't measure quality/knowledge generation. You only measure engagement. This engagement might just be meaningless for the purpose of library creation. It's absolutely unclear, and my impression was that people are not getting help with the opinion questions (and there wasn't a single positive example in the Q&A asking for positive or negative examples). That means that if you try this (and I think you will), you will have no chance of seeing whether it goes wrong. You go into it blind.

In the end we might just collect a pile of garbage (questions didn't get closed for fun in the past). But it's worthwhile to find out. New usage of SO is so low now that one might as well take more extreme measures and bet the whole house; there isn't much to lose.

Also, the step size of your experimental updates is much too large. Instead of, for example, "just" abolishing close votes and downvotes, you also want to change the UI and the roles of people. The probability that all of this together is positive is very small and would require you to rely on luck to get everything right. If you fail, you will not know what the cause was. Believe it or not, you learn more by taking smaller steps at a time.

P.S.: I slept on it and now I think that the vision is basically a human super-chatbot knowledge machine. A chatbot that can answer every question without any problems or reservations, extracts knowledge on the go, and is powered by an endless supply of free human labor. The problem is that it still competes with AI chatbots; they might still be faster and cheaper, and this endeavor might be futile. Additionally, it's not clear if the knowledge base really needs the chatbot output to distil knowledge. Instead, it might be more efficient to completely ignore that and start generating knowledge by other means, whatever they are.

13

I might be missing something, but I do not see what problem this is meant to solve that cannot already be solved through less extreme measures. Anyone can already endorse useful questions by upvoting them; we do not really need a separate "Archive" for them. The "Workshop" already exists in the form of closed questions. They can be submitted for review and reopened when fit. The workflow could be made friendlier, sure. But a separate zone for all new questions, as I understand this, just seems like unnecessary complexity and friction.

Incomplete or unclear questions could be discussed back and forth without penalty. We want to keep the conversation open while the water soaks in. The purpose is to see what value these posts might hold for the future.

I do see the value in revising incomplete/unclear questions without penalty. The current workflow is kind of needlessly hostile. For example, a closed question is pretty much a dead end unless you are lucky enough to have enough people cast reopen votes, which is hard for low-visibility questions. And getting a question reviewed after the 9-day deletion window is very hard without moderator assistance.

As such, questions should only enter a "Workshop"-like state once closed, then they can be revised without penalty until reopened. Revise the workflow for only questions deemed as needing improvement, not all questions.

In the current curation system, curators close questions as opinion-based because they don’t fit our strict "one right answer" format. In reality, a lot of these are questions of conceptual understanding, based in facts, reasoned through collaboratively, and grounded in truth. By damming these up for years, we have incidentally been blocking meaningful attempts by people trying to gain deeper understandings and have "why" conversations that help them grow. "Opinion-based" often just meant "nuance required."

Then we should revisit the wording and usage of "Opinion-based." This reason used to be known as "Primarily opinion-based," and before that, "Not a real question" or "Not constructive" were used to roughly mean too opinion-based. The help center does not suggest all opinion-based (subjective) questions are off-topic; they are allowed so long as they are constructive:

Some subjective questions are allowed, but "subjective" does not mean "anything goes". All subjective questions are expected to be constructive. What does that mean? Constructive subjective questions: inspire answers that explain "why" and "how" [...]

The "Opinion-based" close reason could be revised to exclude constructive questions, even if they are subjective. In some ways I feel that the old close reasons might have done a better job at this.

But we do not need to "remove closing" just to address these issues.

7

What we know for sure from 15 years of SO moderation is this:

  • Up/down-voting as a means of moderation does not work.

    Up-votes tend to indicate that a post has been around a lot, that it frequently pops up in links/search results, or that it is about "curious oddity" topics. Not necessarily that it is good. Up-votes mean that most "canonical" posts get stuck with a lot of crappy answers that we can neither improve nor get rid of.

    There is no "library of knowledge" here. There's rather "oh there's some good books in shelf Z, cramped in between the cheap detective stories, the sensational tabloid newspapers, the very old and incorrect books and various unrelated fiction. Read them all and you might figure out which are the good ones".

    Down-votes create a whole lot of needless drama and conflict. Public shaming as a means to achieve site moderation does kind of work, but basing the whole site on it wasn't a good idea, because no positive reaction comes out of it: no will to learn and improve. Humans respond poorly to critique given in public, no matter how valid it is. As I keep saying, the very basics of management are to give praise loudly in public, but to give critique discreetly in private. Not understanding such basics of human behavior is probably what you get when tech nerds with lots of technical knowledge and no HR experience build a site. Not a lot of empathy or awareness of how humans behave.

  • Reputation as means to indicate moderator suitability does not work.

    There is zero correlation between technical domain expertise and moderator suitability. Everyone knows this and everyone keeps pointing it out, yet the system is still there. We have lots of high-rep users who are clearly not suitable for moderation, while there are users who are both suitable and willing to do more user moderation tasks, but are held back since they lack privileges. (Personally I have some 200k+ rep and 39 out of 40 of the "moderator criteria" used for mod elections, but I don't think I would make a good moderator at all.)

    Due to decreased site activity, reputation will get even harder to come by, making the model even more flawed.

  • Having users with zero interest/suitability for moderation doing moderation does not work.

    These kinds of users just want low-quality content gone. They don't care how, they don't care about other users, they don't care about the long term, they just "down vote, close vote and move on". We even encourage that. But this builds a bad culture over time, where there is constant site scope creep driven by the most pedantic user moderators, never by the most lax ones. Overall on SE, site scopes tend to get increasingly narrow, but rarely ever wider. Eventually you end up with sites where a very large part of all content gets closed, and the few self-absorbed, pedantic users that remain even think that's a good thing.

To keep pretending that the above flawed systems aren't flawed is to keep losing users. There is no easy fix to any of it - as much as everyone keeps trying, we are still stuck with the bad site culture and it is very hard to change. Bad as in lots of strong believers in the above flawed systems. (Just watch as this answer is no doubt getting down-voted into meta oblivion.)

I have no idea how to fix the site, and judging by the previous 20+ bumbling experiments that have been carried out in attempts to fix it, no one else here does either. Creating a new, better site and starting fresh is probably the only sensible thing to do.

2
  • I agree with the observation, but that doesn't necessarily mean that the idea in this question is a better alternative. It's a kind of extreme change to the opposite. Before doing that we could for example simply abolish close votes and clamp scores to 0 from below. That was also never tried. I don't understand how "no moderation" would be the answer to "moderation is faulty". I think there are at least some questions that cannot be answered meaningfully. And the relative score of answers typically indicates well which ones are more useful than others. Commented Dec 18, 2025 at 12:11
  • 9
    @NoDataDumpNoContribution The solution is probably, as suggested by this answer: moderation should be done by those who care about moderation and are good at it. And no, score does not necessarily mean that the answer is good or useful. That's what you would like voting to mean, but it does not work that way in practice. There are countless 50+ upvoted posts on the site that are low quality or just plain incorrect. Voting does not work as intended, period. Commented Dec 18, 2025 at 12:38
6

I'd be willing to prototype this on Generative AI Stack Exchange: I was already planning to experiment with this kind of thing on that site. It's a good test-bed, because the ideas you've described fit the kinds of questions that get asked there.

I don't think this is something you should prototype on Stack Overflow, because the ideas you've described don't fit the kinds of questions that get asked here.

5

Overall thoughts: Ambitious but maybe promising. The experiment's 16% doesn't impress me without a much more nuanced breakdown. But for the Workshop/Archive: the community puts so many hours of work into trying to curate, so I'm happy to hear about anything which might improve the process. A couple concerns:

  • how much work this might be to bring to fruition...
  • encouraging a lot of separate Workshop discussions might fragment the knowledge

No link-only answers

Except for links to our Q&A/Archive/SE network, right?


#1/#3: Definitely need a highly readable Q&A page for the Archive. Stack Overflow has always been decent at this, with room for improvement. The recent/upcoming changes (comment UI, experimental questions, native ads) have made the pages worse—less readable—not better.

I think my preference would be an Archive which is a separate page, with most of the Workshop chit chat gone. If we have useful answers for people, let's let people come and get them. They could then optionally click to view the Workshop page, where they might learn more nuance or history of the discussion.

1
  • 2
    I don't see fragmentation as a concern for a "workshop" space, as reusability and searchability (in my mind) are goals of a library (or here, "archive"- I'm not a big fan of the term here). I'd even encourage workshop stuff to be noindex, and maybe even auto-delete after a year or so of little to no activity. Commented Dec 17, 2025 at 1:01
5

I don't like the idea of there being two different places, because there can be only one default list of results on the question page, on the home page, on any page, and whatever isn't the default will be ignored. We've seen this with collectives, with discussions, with articles, with chat: if it isn't at the place people go to find content, they aren't going to interact with it.

I'm also 100% against creating a separate interface to support these "opinion based" discussions that we all know aren't all opinion based, because it leads to confusion. From people not understanding how to use a given interface, to people having the wrong expectations when they use it, or not knowing which one is the correct one to use, no amount of AI-generated "guidance" on the right side is going to solve this. We need one, clean interface that serves the needs of the community, with clear expectations of what we expect from askers. We have Q&A, it has worked well for us all these years, and I see no reason to go from it to discussions. It is more than capable of serving users looking for more of a discussion with the new(ish) threaded comments, and if a user feels they have an authoritative answer to provide, they can. It's effectively your "workshop" vs "archive" idea built into one step. The only thing holding back this interface from serving users the way this site was intended to operate is the tools we have for curation.

The friction we have at the moment that causes all of this toxicity around asking and answering is due to the tools we have for indicating the quality/usefulness of questions/answers, how we deal with questions that need some additional information to be adequate potential additions to the knowledgebase, and how those tools have resulted in community-created guidelines that go against the natural way people give and receive help (such as having a discussion in comments to flesh out what the user is looking for!). Q&A itself isn't the problem here; Q&A isn't inherently a solution that can't support the questions that aren't quite ready for a complete answer yet. We simply don't allow that collaboration to occur in a way that is useful and fair to the person looking for help.


What I propose as an alternative solution is to change the effects that close votes have on questions. If a question "Needs details or clarity," the closer who is voting for that reason gets to choose a sub-reason (controlled by the community mods for each community) to indicate what details or clarity is missing. Is it missing a code sample? A screenshot of something? An expected outcome? Make it clear to everyone who comes by what it is this question is missing so that it can be fixed. However, the second part of this change is that this vote should no longer prevent the question from being answered. If the question gets an adequate answer and people find the answer useful, mission accomplished. Stop getting stuck on this "but it didn't have an MRE!" madness if someone had the expertise to solve the problem without it. The goal should be leading the asker to an answer, not finding the right close-reason bin to dump it into. If the question is just the umpteenth question about how to print text on the page with Java, it's not going to interfere with actual useful content. We don't need to get in the way of them getting help; tune the low-quality deletion system to dispose of the question due to its lack of value. The elevation of useful content should happen organically, rather than being controlled by curators.
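
To make the shape of that proposal concrete, here's a minimal sketch of how such a "needs details" vote could be modeled; all type and field names are purely illustrative, not an actual implementation:

    # Minimal sketch of the proposed close-vote change; names are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class NeedsDetailsVote:
        sub_reason: str      # mod-curated per site, e.g. "Missing a minimal code sample"
        voter_id: int

    @dataclass
    class Question:
        votes: list[NeedsDetailsVote] = field(default_factory=list)

        def missing_details(self) -> list[str]:
            # What askers and answerers see: exactly which details are still missing.
            return sorted({v.sub_reason for v in self.votes})

        def accepts_answers(self) -> bool:
            # Key difference from today: these votes flag the gap,
            # but never lock the question against answers.
            return True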

As far as downvotes go... I don't see a good solution to them. I don't want to see them go, but I don't see this network ever gaining back its popularity with the way voting works today. It's not about how much rep you get from votes... it's about the negative feeling of knowing someone "didn't like" what you created. I don't think we should get rid of voting or force voting to provide a reason or any of that; rather, maybe upvotes and downvotes shouldn't be combined into a visible score and shouldn't be so prominently displayed. Maybe downvotes shouldn't cause a rep loss or an immediate risk of receiving a 6-month "question ban" that most users never climb out of. Maybe I can be convinced we simply don't need them on questions once the new close system more or less covers all the reasons we'd cast a downvote on a question.

Put another way, I'd rather we build this "knowledgebase" we are trying to build/maintain here such that the users who come here looking for help can use it as the helpdesk they expect it to be, without it getting in the way of what we are trying to accomplish. Ask Questions, Get Answers, No Distractions. The useful content can ideally be identified by how often people visit it and hopefully upvote it, rather than putting it on curators to sift through the mess and pick/choose what goes in. Unfortunately, I don't know how feasible this is due to AI summaries and solutions like AI Assist that don't give any incentive to visit and pay into the feedback loop. Most importantly, however, we can't continue to just push everyone away and keep people from helping each other over whether or not a given question fits in the knowledgebase and expect to continue to have a relevant knowledgebase for years to come. Something has to give.

3
  • 4
    The only users who are getting question rate limited, are users who ignore the community feedback they do receive, and ask multiple questions that are poorly received. It’s not difficult to ask a good question with an adequate amount of information to receive an answer. Commented Dec 17, 2025 at 23:28
  • @SecurityHound and users who weren't provided the feedback they needed to fix whatever problem they apparently thrust upon the poor network. please. People are getting rate limited because they dared to ask other developers for help, and failed to read our minds when we closed their question as "needs more info" and downvoted it to oblivion. Commented Dec 18, 2025 at 7:55
  • 6
    I have seen hundreds of users come to meta SO to ask about the question ban, and I have never seen a case that wasn't caused by the user's own behavior, as an example asking a question then deleting it. They all received feedback; in almost all cases, they ignored it. Commented Dec 18, 2025 at 16:05
1

Do it the other way around. Send closed questions to a Reddit-like workshop, where they can die without polluting the main site

5
  • But how would that help those that ask there? And how would we generate knowledge in the future? Is the solution presented here maybe a bit too simplified? Commented Dec 25, 2025 at 21:59
  • 1
    @NoDataDumpNoContribution Reddit was very helpful for many. Arguably it has some historical value too. The universally useful knowledge will be kept in well-asked questions that do not need to be closed. Just like it is now. Commented Dec 26, 2025 at 1:10
  • 1
    Sure. That's one way to look at it. Why should Stack Overflow try to emulate Reddit if Reddit already exists? No strict need for that. But there are so many fewer new questions here now, less than 5% of the peak. How do you explain this? Universally useful knowledge is just much less than previously assumed? Or people simply don't ask well-asked questions (anymore) and it wouldn't work in any case? Commented Dec 26, 2025 at 8:57
  • @NoDataDumpNoContribution they desperately want involvement, which Reddit has. Let them have it, but not at the cost of main functionality. All questions are answered. New questions appear as new topics emerge. Commented Dec 26, 2025 at 16:27
  • "New questions appear as new topics emerge." I see. You assume that nothing is broken, so nothing needs to be fixed. It's one possible way of looking at it. If it was added to the answer it would make it even faster to understand. Commented Dec 26, 2025 at 21:30
-3

Okay, another answer. This time, even though this might not be something that I would take part in personally, I'll try to help make it happen. Here is your new business model:

It seems clear to me that you want to become like Reddit only with a bit higher quality. Fair enough. If that was indeed the goal, how would one do it?

  • Askers can ask anything, repliers reply to that in any possible way, voters vote on it - all just like on Reddit
  • Do not ask people to dig through the mud and search for diamonds. You won't find enough people anyway. The only thing that can do that without burning out is AI. Use AI to write meaningful excerpts of question and answer for questions with a score above a threshold. Train the AI to take votes into account. How do you do that exactly? That's your business secret. If you do it well, you'll create value. If not, then not. That's your big task. The new core of the business is knowing how to automatically extract knowledge from a water/mud/garbage mixture (a rough sketch of this step follows after this list).
  • Publish these excerpts to the second tier after some time. Make them editable there for trusted users. Add links between the original discussions and the edited summaries.
  • Very important: every question gets only a single summary which summarizes the question and the answer, even if there are five different solutions outlined in five different replies. The expectation would be that all sufficiently upvoted solutions are included in the summary in the order of their votes.
  • Visitors can then consume the summaries, which hopefully have a quality high enough to compete with LLM results (i.e. they may not be tailored, but they explain more and are right more often).
  • Optionally add a duplicate finder system, where related questions / summaries are linked as early as possible, without limiting the ability to comment on anything endlessly.
  • State that you want to become a high-quality Reddit alternative.
  • No AI content creation by users, only by you, only in the one intermediate summarizing step.
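
As a very rough illustration of the summarization step above (not a real implementation: the summarize callable stands in for whatever LLM call would actually be used, and the threshold and field names are made up):

    # Rough sketch of the proposed pipeline; summarize() is a placeholder for
    # an actual LLM call, and all thresholds/field names are made up.
    def build_summary(question, replies, summarize, score_threshold=5):
        # 1. Only summarize content the community has already vetted by voting.
        kept = [r for r in replies if r["score"] >= score_threshold]
        # 2. Weight by votes: higher-voted solutions appear first in the summary.
        kept.sort(key=lambda r: r["score"], reverse=True)
        # 3. One summary per question, covering every sufficiently upvoted solution.
        corpus = question["body"] + "\n\n" + "\n\n".join(r["body"] for r in kept)
        return summarize(corpus)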

Simplified user roles:

  • askers ask questions
  • answerers reply to questions
  • all visitors vote on content (up and down)
  • AI summarizes vote-weighted content
  • experts (distinguished answerers) edit AI-created summaries; maybe, if they want to, they can even bypass the AI part, but let's see if this route is frequented
  • visitors read summaries or ask their own question
  • no close, deletion only of spam, instead only votes and visitor counts

Anything else, especially when it puts too much burden on curators, stay away from it. More AI, also stay away from it. One crucial AI application that you must do well, and one only. You don't want to fail. Make it simple (for example, are question types really needed?).

And would I like it personally? Not that it matters, because you surely can find a different audience elsewhere. I really like that it's more oriented towards improving content, more collaborative. And I like summarizing multiple answers; this concentrates knowledge much more. But I don't like conversations. My time is too precious to take part in them. Still, if it's empirically shown that this is the only way to get humans to reveal their problems... what use is a system that nobody uses? I definitely don't believe that all questions have been asked already, so making a system where people ask them can't be too bad. But I myself would probably not volunteer for it.

The alternative is that everyone uses only chatbots in the future and the amount of publicly available knowledge will shrink. Maybe generating new knowledge will actually become harder, not easier.

9
  • 2
    my initial reaction to the AI summarization idea is to question why. I know you said because people won't want to dig through the mud, but is AI a good fit for that problem? why not just present things based on votes to people interested in building the library/knowledgebase artifacts? Commented Dec 22, 2025 at 11:10
  • 2
    @starball Why? For the visitors that only seek knowledge, no fluff. Even filtering would still leave fluff inside. Is AI suited for this task? Yes, that's what LLMs are built for. With their attention mechanism they can separate unimportant from important things across replies and summarize content effectively. They don't need to generate new knowledge and they may make errors (like everyone), but they don't tire and scale well. To summarize: conversations are best for askers, concise summaries are best for visitors, experts can fill this gap but burn out doing so, AI has to come to the rescue. Commented Dec 22, 2025 at 13:06
  • 3
    ... and experts are only giving the final touch. Probably depends on how good the part in the middle really is. The alternative, only showing some highly voted replies is what Reddit is doing already and I think it's still not condensed enough. But that would also be an alternative, however even closer to the actual Reddit. Commented Dec 22, 2025 at 13:09
  • "Use AI to write meaningful excerpts of question and answer for questions with a score above a threshold." If there is anything I can take out from my experience as a curator and moderator involved in AI moderation is that you cannot use AI for writing summaries. You cannot use AI for writing up anything. Period. It will be full of flaws and inaccuracies. I want knowledge written in whole and organized by people, not some chatbot. Using AI to find knowledge and to point to the original, yes that can work. But without summaries or anything like that. Commented Dec 25, 2025 at 8:44
  • 1
    @DalijaPrasnikar You can leave that part out and let people do the work instead. But it would be more work, especially if you wanted to summarize all the old answers to all the old questions; still, it's not a complete stopper. My experience is different: summarizing text was one of the first and, in my opinion, best applications of LLMs. They don't have to invent new stuff, just identify fluff and duplicate information and remove it. If LLMs can do only one thing, it should be this. In the end, a higher knowledge density greatly helps in competing, either by AI or by humans, whatever works. Commented Dec 25, 2025 at 10:18
  • @DalijaPrasnikar But if this was too much, one could still go there partly, for example by identifying parts of the replies that are important and then letting people write the summary. Whatever gives a high enough quality in the end. Commented Dec 25, 2025 at 10:20
  • I have seen AI literally quoting documentation in two sentences and adding the link, and then when you go and read the documentation it said exactly the opposite of what was in the summary. There is also this github.com/mdn/yari/issues/9208 Now, the MDN debacle was in the early days and AI got better, but there is still a huge difference between better and correct. Letting people write summaries based on AI... yeah, that will work well. You will just get human-rephrased AI junk. Commented Dec 25, 2025 at 11:06
  • Using AI for finding information - providing merely links to original content - yes. Using AI for helping people writing better questions - yes. Using AI for directing people to possible duplicates - yes. But using AI for any kind of summarization of knowledge or curation - hard no. Commented Dec 25, 2025 at 11:12
  • @DalijaPrasnikar Sure there are bad examples but there are also better examples, I think. And the current state is hopefully not the end. Especially I don't see humans summarizing conversations. It's too much work for them. In my mind, it's either AI and mediocre but improvable quality of AI or nothing at all. Your opinion is very clear on this point. Mine is different. Hopefully also became clear. Commented Dec 25, 2025 at 14:37
