-274

We are evaluating a conceptual feature that would pair askers with available experts from the community in a live session format. Beginning today, we are running an experiment to determine whether it is feasible to build a feature that matches experts to askers based on experts’ on-platform behavior (such as watched tags). If you are included in the test, you may see this prompt on question, tag, and search result pages while using Stack Overflow:
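
In its simplest form, matching by watched tags could be sketched as follows. This is a purely illustrative example; the function names, the Jaccard scoring, and the threshold are inventions for this sketch, not the actual implementation:

```python
# Hypothetical sketch of tag-based expert matching (illustration only;
# NOT the actual Stack Overflow implementation).

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_experts(question_tags, experts, threshold=0.25):
    """Return experts whose watched tags overlap the question's tags,
    best matches first; an empty list means "show no prompt"."""
    scored = [
        (jaccard(set(question_tags), set(watched)), user)
        for user, watched in experts.items()
    ]
    return [user for score, user in sorted(scored, reverse=True) if score >= threshold]

experts = {
    "alice": {"python", "django", "sql"},
    "bob": {"latex", "tikz"},
}
print(match_experts({"python", "django"}, experts))  # ['alice']
```

A threshold like this is also what would let the prompt be suppressed entirely when no sufficiently relevant match exists.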

A screenshot of the homepage of Stack Overflow showing a modal that says "A new user needs your help." It has two buttons: "Review help request" and "Not interested."

You will see this prompt when you are matched with an asker’s request for help. The prompt will be shown to you up to a maximum of five times, or until you submit a response (including “I can’t help right now”). If you do not want to see this prompt at all, you may opt out by switching off “Enable experiments” on your profile.

For transparency's sake, this feature has not been implemented on the platform, and you cannot currently be connected with a user who needs help through this modal. The modal exists only to collect data from potential helpers.

Please do not be surprised or concerned if the pairing between you and the asker seems strange or unexpected; we do not expect the pairings to be exceptionally accurate at this stage of development.

For more details, refer to Sander's answer below.


Update for May 27th, 10:40 PM UTC (~7 hrs post-deploy): Several bugs have been fixed since this experiment was released this morning. Certain data entries have been cleaned up, the matching algorithm was tweaked to improve quality, popups will no longer show if there are no matches available, the "additional comments" field is now (correctly) optional, and the modal should no longer close accidentally while someone is typing. Also, "Learn more" and "Why am I seeing this?" links were added to the modal.

Update, June 3rd, 2025: We have now gathered sufficient data from this experiment, and it has been turned off. Sincere thanks for all of the feedback on this post.

75
  • 84
    The matching does not seem to work very well. I just got a pop-up about some sql stuff - I have never answered, asked or watched this or similar tags. That's not "not exceptionally accurate", that's just plain spam if the technology isn't at all related to past activity. I'm exclusively active in latex related tags - I'd understand to see some false positives from related tags, let's say rmarkdown, but nothing so far off. Commented May 27, 2025 at 15:29
  • 77
    What added value do you expect this feature to have compared to what subject-matter experts who want to help new users already do — i.e. watching tags, browsing the New tab, engaging in Staging Ground, ... Commented May 27, 2025 at 15:36
  • 160
    The image makes it look like I'm being asked to provide personalised support for someone. Honestly, that's not why I, and I strongly believe many others, are on Stack Overflow; the idea is to provide public answers to helpful and useful questions. If this is some kind of pseudo-consultancy experience, I hope that I get to let the user know my hourly fee, and they are asked to input their payment details, before I start providing them with a consultancy service. Commented May 27, 2025 at 15:43
  • 37
    @Slate If there is no rush, why not announce such an experiment first and then wait a couple of minutes before going live so people won't get surprised by fishy looking popups on the main site? Commented May 27, 2025 at 15:54
  • 31
    "I would encourage you to allow time for results to come in and be analyzed, as well as for public feedback to be processed, before doing anything rash." The only acceptable result would be to abandon the experiment ASAP. It's genuinely hard to find accurate words for this idea that are compatible with the Code of Conduct. It is as if the Wikimedia Foundation decided we just don't need Wikipedia at all any more because LLMs exist. I mean, obviously it works differently, but it's genuinely that destructive to the fundamental Stack Exchange model. Commented May 27, 2025 at 15:54
  • 41
    It's very disturbing to me that the company considers this. Commented May 27, 2025 at 16:01
  • 23
    "you cannot currently be connected with a user who needs help through this modal. The modal exists only to collect data from potential helpers." - so, the point is to find out who would be willing to "help" through such a model? I could see the value in collecting this information - if it were used to ban the high-rep accounts that serially post answers to blatant duplicates and negative-value debugging requests.... Commented May 27, 2025 at 16:05
  • 20
    @dan1st Indeed, but also: what about telling us what the underlying idea is first before trying to do any implementation? Commented May 27, 2025 at 16:22
  • 31
    It's fundamentally not possible to be welcoming to people who are looking for somewhere else. Engagement only works for those who are interested in the actual thing available for them to engage with. "Slow" is expressly not a concern. Commented May 27, 2025 at 16:36
  • 36
    I post answers on Stack Exchange sites to help all the people who have a similar question to the OP, not just the OP. I also help people in chat rooms, which does have a more one-on-one feel (although it also has the support of other people in the chat room), but it's still a public interaction. I am totally not interested in giving free help in private to an individual. Commented May 27, 2025 at 16:59
  • 67
    "new users do regularly bring up the issue of speed" – Then please properly educate them how the site is intended to be used: By searching for and finding an already existing Q/A pair. An existing Q/A pair gives instant solutions, much faster than any volunteer could possibly hope to help. While you're at it improve the seach, don't try to turn this into a helpdesk. Commented May 27, 2025 at 17:13
  • 22
    Why does it feel like the meta community always has to beg just to get a bit of advance notice for these experiments? It feels like every other day we get slapped in the face by a new surprise, and when we come to meta to figure out what is going on, we get answers like "scheduling unfortunately made it more difficult than expected". If you can't figure out how to communicate, I really don't have too much hope for the direction of SO. Commented May 27, 2025 at 21:00
  • 43
    @Slate "I think internal readers may be starting to find it demoralizing"... no, what is demoralising is feeling that Stack Overflow is completely deaf to the active and passionate contributors to this site. Too many massively unpopular experiments and changes just get pushed through with the subtlety of a sledgehammer and providing constructive feedback is akin to shouting into the void Commented May 28, 2025 at 2:19
  • 29
    This whole thing feels like it was dictated, top-down, from people who have no concept of who their experts are, in which subjects, or how people use the site. Sadly sort of unsurprising... Commented May 28, 2025 at 13:51
  • 22
    It's not easy for community members to give calm, kind feedback - particularly when they feel the core purpose of the platform they've been using for years is being undercut by the company controlling it. if y'all are truly interested in understanding how users feel about these tests, it's imperative that you give (kind) "negative" options for them to select in your testing UX feedback - which means you must take a breath when creating a test to ensure such options exist. This may lead to more negative feedback but it will be more palatable to you because you chose the wording. Commented May 30, 2025 at 13:17

44 Answers

14

I’m an engineer on the team at Stack Overflow working on this test. We are experimenting with new ways to help users get unstuck, especially those who may not be able to frame a great question on their own. The goal of this experiment wasn’t to fully launch a new product or experience, but to explore one core question:

Is there a group of users who need help but would otherwise be hesitant to use the platform, and a group of users who want to help, and could we match them meaningfully?

The requests you saw in this test were real submissions from a prior “fake door” test, where users were offered a form on which they could get live help. We used those examples, not to ask anyone to reply directly, but to see how helpers might feel about the content and format. We’re learning, not launching.

We’ve been doing a lot of quick assumption testing: A/B tests, user interviews, surveys, and small experiments like this one. Our goal is to learn fast, and this sometimes leads to rough or mismatched experiences. This is one of several data points we’re gathering to better understand what might work.

We know many of the examples didn’t meet the high standards this community sets for quality. That feedback is entirely fair and incredibly valuable. We absolutely hear you: quality matters, context matters, and balance matters.

I am writing this to be transparent about what we are trying to learn and to get more focused feedback. We want to know: is there anything you think would make an idea like this sensible and worth exploring to you?

Thank you again for the candor, and for holding us to high standards.

27
  • 45
    You really should have given us a lot of advance notice about this. Commented May 27, 2025 at 17:01
  • 13
    My feedback on this is that the way you're trying to collect data doesn't seem quite right. Imagine you click the button to review the request and then this pops up. Let's set aside the fact that the question there is exceptionally unclear. To begin with, what does "Yes, I can help" even mean? Does it mean "I can help by answering the question", or does it mean "I can help guide the asker in improving the question and asking for missing details"? What does "Advance beginner" even mean? Do you expect to receive any sensible data given that any user would have these questions? Commented May 27, 2025 at 17:08
  • 18
    This sounds like somewhat replicating Staging Ground functionality Commented May 27, 2025 at 17:28
  • 10
    I strongly feel that what's described here should have been part of the initial post; the "light on specifics" approach of the OP, I think, created the impression of an in-development feature instead of an experiment that's primarily about fact-finding, which seems to have effectively triggered Meta's fight-or-flight response in the span of an hour. The messaging was badly fumbled here, and created a lot of undue friction which could have been avoided, IMO. Commented May 27, 2025 at 17:31
  • 36
    In addition to this basically duplicating Staging Ground, being undercommunicated, and being poorly implemented to the point of confusing/unusable… It's quite bizarre to read a staff member write what you have in bold here. You are basically saying, is there a way for people who don't want to use our product/tool/platform/service to be matched up with other people who don't feel bound to our product? I mean, I'm sure there are, but why do you want to cater to that? Should the hardware store start providing lunches because many of the people shopping there also eat? Focus on your core business. Commented May 27, 2025 at 17:50
  • 18
    This should be an edit to the main post (announcement after the fact). That post can use a lot of clarification, some of which exists here. Commented May 27, 2025 at 18:08
  • 5
    I think the company could branch out to a tutoring site to complement the library site and it would be greater than the sum of its parts so I'm glad to see that idea being explored. I don't think the company's relationship with its community is at the point where you can move this fast. The popup needs to communicate that this is for data collection. It looks too much like a feature. Is there a reason why you didn't just ask the person some survey questions after they clicked on the popup and asked them their impressions directly? Commented May 27, 2025 at 18:08
  • 8
    "We used those examples, not to ask anyone to reply directly, but to see how helpers might feel about the content and format. We’re learning, not launching." - The example I looked at was a question that COULD NOT be answered, and I suspect that will be true of the majority of the examples you have: questions that are so bad, so poorly written, and containing so little diagnostic information that they cannot be answered. Which is likely the reason they have not been answered, if they were submitted as questions on SO, because they contain no diagnostic information and do not even come close to meeting our guidelines. Commented May 27, 2025 at 18:31
  • 9
    I am shocked that even 1 minute went into this experiment since it WILL result in 1:1 help which does NOT help the community. Stack Overflow is NOT a help desk. Any features that allow 1:1 help to be given to a user from another community user should NOT be developed. Commented May 27, 2025 at 18:33
  • 10
    "[is there] a group of users who want to help[?]" - Absolutely, but in the sense that you mean "help" here, you don't need any kind of surveys nor "fake door" deception. All you need is a database query for users with high reputation/day but zero participation on the meta site. "[C]ould we match them meaningfully?" I'm quite confident that you could, but please understand that we do not want this to happen on the main site. Commented May 27, 2025 at 18:55
  • 12
    @CodyGray "Should the hardware store start providing lunches because many of the people shopping there also eat?" To be fair, it worked for Ikea.... Commented May 27, 2025 at 18:56
  • 5
    @Slate I fully appreciate that it's not a direct responsibility of the OP here, but surely someone is coordinating these tests centrally? I've given feedback on other changes and not got a response. It's not worth writing a Meta post at this point if you're firing tests from multiple directions at the same time (how can you even test something this way?) and not actually doing anything about the feedback... which comes after the fact because nobody bothers to announce them beforehand. We just stumble into each test at this point Commented May 27, 2025 at 23:12
  • 12
    I love the double bait and switch where askers are also misled by a "fake door". Reminds me of when a smart thermostat maker asked about a subscription fee they would introduce. "Ha ha, we were only testing you, nudge nudge wink wink". Commented May 28, 2025 at 5:36
  • 4
    Given the fact that nearly everyone here has noted that the "matching" isn't even an approximation of what we've previously posted answers to, I have to wonder if you're using an LLM to drive some of this. Beyond that, if you are actually wondering Is there a group of users who need help who would otherwise be hesitant to use the platform..., maybe ascertain that FIRST. e.g. message new accounts who don't post after x interval, or who post 1 question and then offer zero responses subsequently. IS there a problem? Figure that out before trying to (clumsily) "solutionise". Commented May 28, 2025 at 6:15
  • 4
    "and a group of users who want to help" -> Yes. "and could we match them meaningfully?" -> I can hardly see how. Many of us have lost our patience from always having to ask for a minimal, reproducible example, and really just want to stay away from redundant and "do my job for me" questions. On the other hand, some users like to answer "bad" questions, but this is not appropriate on SO. How do you match the two, though? IDK. This should be its own, separate thing. Commented May 28, 2025 at 7:19
13

I hit this pop-up once yesterday, and I have some thoughts.

The idea of connecting people who need help with those who can offer assistance is a fantastic concept.

Especially on Stack Overflow, where subscribing to RSS feeds is nearly impossible, I'd welcome being told where I can offer assistance, whether that's answering a new question, editing a question or answer to clean it up, leaving clarifying comments, or voting. There are also opportunities to surface questions on sites that I don't frequent or may not even be aware of, for cross-network discovery.

However, I have issues with what I see.

First, the "live session" format goes against the nature of the Stack Exchange model. I don't participate here to help an individual with their problem. I participate here to help share my experience and expertise in building and curating a library of widely applicable solutions to reasonably common problems or points of confusion. I do this in my free time. Although chat is a good addition to the platform, the sharing of knowledge should be done asynchronously to let me contribute based on my availability while also working in the open, where others can review my work and share their own knowledge.

Second, the bugs in this experiment were worrying. The one example I saw was a terrible fit that didn't show me anything I could answer, the reasons for not answering were terrible, and bugs with the optional comment field prevented me from submitting the form. I don't know how you can learn anything useful or meaningful with these kinds of problems.

Although the problem - connecting people with answers to people who want answers - is real and worth solving, I just see things that need serious rework from a conceptual level.

1
  • Regarding the bugs, maybe the high frequency of new experiments means that there is little time to actually prepare them, and the community is also the beta tester of experimental setups. How much that contributes to a possible negative outcome is difficult to say, but it may distort the results. And for connecting people: we already have the "interesting questions for you" list. I imagine this feature to simply be a popup trying to get my attention on a single item of that list, just in case I didn't look at that list but wanted to. Commented May 29, 2025 at 8:38
12

When I first saw it, the popup appeared to be part of some unannounced experiment. It wasn't clear what the circumstances or consequences were, and the choices were poor, but I clicked to investigate.

It seems obvious you would not be finding out whether people would use the feature the way they would if it were real, even if it had been real.

1
  • 6
    yea... i had no interest in participating, but the curious side of me couldn't not check it out. At minimum it makes a good screenshot to make fun of Commented May 28, 2025 at 19:25
10

The post is worded extremely confusingly. These two paragraphs, placed next to each other, make no sense.

Please do not be surprised or concerned if the pairing between you and the asker seems strange or unexpected; we do not expect the pairings to be exceptionally accurate at this stage of development.

For transparency's sake, this feature has not been implemented on the platform, and you cannot currently be connected with a user who needs help through this modal. The modal exists only to collect data from potential helpers.

For transparency's sake, don't write obtuse stuff like this. A staff member replied in an answer, but that content should be in the question, not in one of 30 answers.

1
  • 3
    We could in principle edit the answer into the question to show how to make really high quality Q&A. Commented May 29, 2025 at 8:46
10

This is going to sound unkind, but here goes.

We are supposed to be nice to new users, and when I do interact with them I generally stick to that.

However, one thing I've noticed, and I am sure many have noticed as well, is how many new users:

  • ask a question

  • never come back to accept answers, upvote answers, respond in any way to any edit suggestions, etc...

If you look at users sorted by reputation, a massive number of users have only ever posted one question, never another.

As a result, I now consciously avoid answering questions from people with less than about 50 rep. In my experience, doing otherwise carries a much better than even chance of my investing time and never even getting a thank you. Not only for me, but for other people who answer as well (i.e. it's not just due to bad answers from me).

So I see very little value in a feature that pushes more junk into that particular funnel.

p.s. A feature that is, as others have already said, quite badly implemented as well. I just got paired with a reactjs question. I have never posted anything on that tag.

p.p.s. Yes, I would still answer a particularly good question, if one happened to come from a newbie. Just not the run of the mill dross - in terms of subject matter, not presentation - that most post.

1
  • 7
    This answer doesn't sound unkind to me at all. Commented Jun 1, 2025 at 18:21
10

The other answers have already documented the immense issues with this "experiment", from the dead-on-arrival concept to the not-even-mvp "just collecting feedback" implementation with an abysmal design that went live. I struggle to find any nicer adjectives than "clueless" for this, which leads me to the following conclusion:

With this experiment, from my PoV the decision makers at SE - as in, the people who came up with this idea and decided it should go live without gathering feedback first, or even announcing it before it went live - have clearly demonstrated that they are unaware of what this site is about and what made it successful, which changes are needed and useful, and how not to piss off the subject matter experts even more. Furthermore, all involved developers, CMs, product owners, and whatever other roles exist at SE have also demonstrated that they either didn't notice any of the glaring issues, didn't voice any concerns about them, or were ignored if they did and are fine with working in an environment that does not care about their feedback.

In combination with all of the other stuff happening recently and over the last years, I will take this as a clear indication that SE as a company has lost the plot to a degree that is no longer salvageable. My expectations for the future of SO were already very low due to the continuous push for "engagement" without concern for whether the engagement is positive, and the general disregard for quality and expert opinions that didn't match the narrative pushed by the company. Now, my expectation is that SE will actively drive enshittification until this site looks worse than Yahoo Answers ever did. The only positive thing is that I gained an understanding of your CEO's AI visions (or should that be hallucinations?), because at the moment it feels like replacing every single person working for SE with an LLM would be preferable.

8

Isn't this what chat was supposed to be all along? If I have, let's say, a PHP question which might not be a good match for the main Q&A, and/or I'd also like to interact with someone live, I would then go to the PHP chat and ask the question there. Online domain experts who are both present in the chat and willing to help may then do so.

That way you can even get nuanced input from several people. Furthermore if person A has a question and person B answers it, then person C might chime in with some intricate detail that B didn't know about. Then it's learning by answering questions, which in turn might motivate people to answer questions.

In general I'm all for various mentor programs, minus the urgency aspect. If someone needs expert advice from me now (while I'm actually at work), then sure, I guess I can provide that if you compensate me for lost salary. Rep won't do, we're talking real cash. And I don't think SO wants to go there.

Also, I'm sure there are legal aspects to using volunteers for work that is normally paid for.

1
  • 3
    I like the emphasis on multiple persons giving help. It might not actually be the most efficient solution to request help from only a single person. If only we had a place where people can propose their "problem" and work together with others, in a timely fashion, to make it a clear problem statement ready to solve. I think we even have that already. Commented May 28, 2025 at 14:18
7

Could you at least try to only give help requests which have literally any relation to what we've actually posted? I'm obviously not going to be able to help answer a question in a programming language I've never used before.

How about only giving me questions which have tags I have a bunch of answers in?
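
As a sketch of what I mean (the function name, data shape, and the cutoff of 5 answers are all made up here for illustration, not anything Stack Overflow actually uses):

```python
# Hypothetical sketch: only surface a help request if the user has a
# track record in at least one of the question's tags (numbers made up).

def should_show_prompt(question_tags, answers_per_tag, min_answers=5):
    """True only if the user has at least `min_answers` answers
    in some tag the question is tagged with."""
    return any(answers_per_tag.get(tag, 0) >= min_answers for tag in question_tags)

print(should_show_prompt(["reactjs"], {"latex": 120, "tikz": 40}))       # False
print(should_show_prompt(["latex", "pgf"], {"latex": 120, "tikz": 40}))  # True
```

Even a crude filter like this would at least stop showing Python questions to people who have only ever answered LaTeX ones.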

3
  • Some changes to the matching process intended to improve match quality went out a little bit ago. Most users should now see a match that is significantly more relevant to them, or not see a match at all if none is available. Commented May 27, 2025 at 22:58
  • 1
    Well, no. I got the prompt 15 minutes ago, and as a Windows/AD sysadmin (no dev, no devops) who has NEVER opined on anything Python-related, it was interesting to get a question on a Django issue. Commented May 28, 2025 at 6:22
  • 3
    This happened to me a few minutes ago. I have a score of 9 for the question tags. Single figure score. How can you possibly obtain useful data if your supposed "expert" knows (as far as you can tell) almost nothing about the question's subject matter?! And even if I wanted to help in this way, what would motivate me to ever again look at one of these when my initial experience of the matching quality is that poor? In any case, the whole thing is contrary to the things which make S.O. good in the first place, so I'm opting out of this right now. Commented May 28, 2025 at 20:00
7

I would like this help-desk feature if

  1. it's opt-in
  2. people can charge $$$ for this consultancy service they would be privately providing
7

Still failing to solve any of its ongoing problems, like too many bad questions versus overly gatekeeping moderation, too much outdated content (accepted and upvoted jQuery answers), and the fear of imminent irrelevance due to AI assistants, Stack Overflow keeps launching half-baked ideas.

Maybe Staging Ground is a chance to solve some of SO's major issues.

Randomly pairing askers and community members without considering their expertise and willingness to support, emotional manipulation using deceptive UI and copy, and the lack of concept to integrate possible solutions into the popular Q&A or wiki format, clearly isn't.

6

PLEASE DON'T ENROLL ME INTO YOUR EXPERIMENTS WITHOUT MY CONSENT

If you want my opinion, ask for it.

Why are you baiting me into your experiment that I didn't consent to with emotional language such as

Someone has requested your help

when in fact no one has? One might be inclined to say this is "lying to my face". And there's a second lie when you claim that there is no match currently.

I expect this type of nonsense from scams, which I delete with a vengeance. I don't expect this from someone who thinks they can have a long term relationship with me.


Allow me to elaborate, because clearly some people take issue with my wording. Imagine your boss walks over and describes in great detail a new internal job opening. Your boss then tells you that you're a great fit for the job and asks if you'd like to apply. After you agree enthusiastically, your boss says: "JK, I was just testing how happy you are with your current job".

There's a line between treating someone as a person who you'd like to know the opinion of, and treating someone as a bag of data waiting to be harvested. It also happens to be a big fat screaming red line with landmine warnings posted at regular intervals. You don't cross that line unless intentionally.

7
  • 4
    There is definitely an issue here with the switch being opt out (meaning that the default state is to be opted in), and users not being notified that there is a way to opt out of experiments. Also, there's a secondary issue that "experiments" is very, very broad, and so far has been used (as far as I'm aware) to mean UI-related experiments, whereas this is something completely different. It's unclear that it should fall into the same bucket. Both of these are UX problems. I think this answer could do a better job of calmly pointing that out. Commented May 27, 2025 at 17:44
  • 12
    @CodyGray But you see, the level of disbelief and upset is well-represented here. If this were a psychology experiment in a university, it'd go through ethics review boards and I'm pretty sure the first box to fail is informed consent. Commented May 27, 2025 at 17:55
  • 3
    just opt out, what's the problem. If experiments were opt-in only they'd be more or less useless at gathering the data they intend to gather. Commented May 27, 2025 at 19:02
  • 1
    While the OP wording is crap, I heartily disagree with anyone saying "opt-in is useless for data gathering". 1. This is a website, not a drug trial, so who cares if there isn't full coverage. 2. Even with drug trials, people are asked for their consent. 3. If this initiative is intended to be voluntary for members, then offering the opt-in experience should apply to the testing as well. I wasn't aware of the global opt-out feature, since I am not on this site for long stretches of time. I am very tired of tech companies opting users in by default, except for security! Commented May 28, 2025 at 6:28
  • 2
    @LeeM It's precisely the data of less active users such as you that these experiments try to gather; with an opt-in, the data will be quite heavily skewed towards the actively participating crowd - not exactly valuable data, as that crowd regularly shares their opinion on meta already. Commented May 28, 2025 at 7:45
  • 11
    I upvoted — "Someone has requested your help" is a lie and scammy emotional manipulation. Commented May 28, 2025 at 16:02
  • 1
    There's a mismatch between your header and most of your post. Your header complains about being enrolled in an experiment without consent (even though experiments are somewhat routine here). Most of your post complains about being misled and lied to. Those are separate concerns. (Consider this: SO could have assumed this was a good feature and implemented this partial and misleading functionality without first running an experiment. Would that make you feel better about being misled? Or is the real issue being lied to / scammed?) Commented May 31, 2025 at 5:36
6

I appear to have just received a request to help with a question that I cannot find anywhere on the platform. The tags were [tag], [tag], a third tag, and [a fourth], and the question was something related to minValue and Value. I tried to search for this specific question, which was a pretty good quality question with a clear problem statement, multiple relevant code snippets, and an explanation of what they tried, but when I searched for those tags, the word "minValue", and is:question, I was utterly unable to find this question anywhere in the 11 results it gave me.

I have a suspicion, which I cannot confirm, that this question was AI-generated and never actually asked on the site. If that is true, it would be a VERY GROSS violation of the trust that users have in this platform, a violation strong enough that it may lead me to stop using all parts of the platform apart from the 2 chatrooms I have friends on...

3
  • 2
    This answer should clarify things a bit for you. Specifically see: "The requests you saw in this test were real submissions from a prior “fake door” test, where users were offered a form on which they could get live help" Commented May 28, 2025 at 13:21
  • Sorry, you have been tricked in this experiment. The request to help wasn't exactly honest. It's their new style. One way out in the future, would be to opt out of experiments. Commented May 28, 2025 at 14:21
  • I had a similar experience. The question I got was simple, but I didn't answer it because it felt like I was being tricked into helping train some AI thing. I barely post anything on this website, yet I'm trusted enough to get a direct request like this? Red flags all around. Commented Jun 1, 2025 at 15:13
6

The original post already has many critical responses that I agree with, so I just want to add a practical detail: the popup appeared when I clicked on the achievements icon and overlapped the popup I had requested (see screenshot below); when I clicked "Not interested", they both disappeared. I consider it a small bug.

[Screenshot: the help-request modal overlapping the achievements popup]

-7

Who/what decides who is an expert?

