Being good at coding competitions correlates negatively with job performance (catonmat.net)
251 points by azhenley on Dec 15, 2020 | 129 comments


I regret causing confusion here. It turns out that this correlation was true on the initial small data set, but after gathering more data, the correlation went away. So the real lesson should be: "if you gather data on a lot of low-frequency events, some of them will display a spurious correlation, about which you can make up a story."


So can you now say with some confidence that competition performance doesn't correlate with job performance? That's still kind of interesting, although less so than the original conclusion.

The explanation of the effect did seem a bit too convenient.


You and/or Google HR were also prominently quoted as saying, IIRC, that GPA, standardized test scores (and interview ratings?) had no observable correlation with job performance either. I always wrote that off as just range restriction/Berkson's paradox, but did those also go away?


This made me reread the article[0] (if we were talking about the same one), and I don't see any mention of interview ratings and job performance.

[0] - https://www.nytimes.com/2013/06/20/business/in-head-hunting-...


Hm? It's right there at the start:

"Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship.

...One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation. Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything."


Seems counterintuitive. Naively, high GPA = high work ethic + IQ, which surely plays a role in job performance, no?


Of course, but that's where selection processes and psychometric considerations start to play havoc with naive correlations.


Interview ratings would be a surprising one, hard to imagine not overhauling the system after discovering that.


I remember reading something about Google interview ratings being uncorrelated to job performance, but that's clearly a conditional correlation based on the person being hired. For all we know, the ratings might be highly correlated with job performance up until the hire/no-hire cutoff. After all, their primary purpose is to make that binary hire/no-hire decision. Hopefully, the scoring system is hyper-optimized to be a good signal right around the hire/no-hire boundary, as the scores themselves aren't that useful for obvious hires and obvious no-hires: the scores are a decision-making tool.

In order to really get a good assessment of whether the interview ratings were effective, they'd need to also hire some random unbiased sample of those who fail the interview process. There are alternative ways of slicing the data to help give insight, such as looking at only those who barely passed the interview process, or looking only at the bottom 10% of performers. However, when you're looking at such a highly biased sample (only the small-ish percentage of people hired), it's hard to say what the correlation is across the entire interview population.

At the risk of repeating myself, we don't particularly care about the predictive power of the scores across the whole range, only their predictive power across those who aren't obvious no-hires and those who aren't obvious hires. That's the range where the power of the interview scores as a decision-making tool is most important.

Also, if two metrics disagree, it's not clear which one is problematic. It's possible that a poor correlation indicates that there's a problem with the performance rating system.


> I remember reading something about Google interview ratings being uncorrelated to job performance

You haven't; Google's interviews are correlated with job performance. They have data on it internally; people who work there can look. What you probably read was that brain teasers like "why are manhole covers round" don't correlate with job performance.


It was while I worked there that I read something briefly about them being uncorrelated, but it was probably just some popular press mischaracterization. I'd done over 100 interviews for Google SWEs, and wasn't aware of where to look up the data internally. In any case, the article I read wasn't interesting enough for me to do more digging.

I guess I should have been more critical at the time. Thanks for the clarification. Is it widely known where to look up this data internally now? I left Google over a decade ago.


When I was there I just searched it on moma and they had a paper on it showing the correlation coefficients.


Also, for any given title, people on HN will come up with anecdata to support its assertion.


Thanks for clearing that up. Was job performance still positively correlated with a higher GPA on that larger dataset?


Notice this comment is from Peter Norvig. ^^^^^


Ah, I remember hearing this from you in person in 2011 and have repeated it occasionally since. Thanks for the update!


This is Berkson's Paradox. Even if coding competition performance correlates positively with job performance in the general population (which it certainly does, given that most people can't code), selecting for this attribute in the hiring process leads to a negative correlation among those hired.

Great write-up by Erik Bernhardsson, CTO of Better, here: https://erikbern.com/2020/01/13/how-to-hire-smarter-than-the....
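
As a rough illustration of that selection effect (a hypothetical sketch, not taken from Erik's post: the variable names, weights, and 1% cutoff are all made up), a tiny simulation shows how a hiring score that overweights one attribute can flip that attribute's correlation with performance among the people actually hired:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500_000

    contest = rng.normal(size=n)        # coding-competition skill
    other = rng.normal(size=n)          # everything else that matters on the job
    performance = contest + other       # true job performance: both matter equally
    hiring_score = 2 * contest + other  # hiring signal that overweights contest skill

    # In the full population, contest skill correlates positively with performance (~0.7).
    print(np.corrcoef(contest, performance)[0, 1])

    # Among the top 1% by hiring score (the "hired"), the same correlation comes out negative.
    hired = hiring_score > np.quantile(hiring_score, 0.99)
    print(np.corrcoef(contest[hired], performance[hired])[0, 1])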


Simple analogy. There is no correlation between height and salary across NBA players.[1]

The naive conclusion would be that height has nothing to do with basketball ability. The real answer is that markets are efficient and are already correcting one important feature against other predictors. Steph Curry wouldn't even be in the NBA if he had the shooting ability of Gheorghe Mureșan.

[1] https://rpubs.com/msluggett/189114


It should be mentioned that Steph Curry was drafted behind Hasheem Thabeet, Tyreke Evans, Ricky Rubio, and Jonny Flynn, among others.

Hiring is always a crapshoot. Pro sports teams spend a lot more time and money on talent evaluation than tech companies and still get it hilariously wrong all the time.


On the flip side, every NBA player who has won league MVP in the lottery era (>1985) was drafted in the first 15 picks. Steph Curry of course has won MVP twice and was drafted behind the 6 players you mention, but then he was drafted next, ahead of every other eligible player. I'd argue that through this lens, NBA teams are really good at "hiring".

Some teams draft for current skill, others draft for absolutely maximum possible potential, others draft for some combination of both. Some teams are willing to risk a "bust" if there is the potential of ultra-elite league-best skill. And considering that no player who has reached the MVP level has fallen further than 15th, I'd say as a whole the NBA teams are doing very well.


> every NBA player who has won league MVP in the lottery era (>1985) was drafted in the first 15 picks.

I mentioned Steph Curry because the original commenter did, but in general it's very strange to focus on the MVP. That's a small sample and cherry-picking the results, only talking about the successes and overlooking all of the draft busts. There's only 1 MVP in the league every season, and some players have won it multiple times. It was won 5 times by Michael Jordan, who incidentally was drafted 3rd (behind Sam Bowie). Only 21 players have won NBA MVP during that period.

In any case, that MVP record doesn't hold in other sports. For example, NFL MVP Tom Brady was drafted behind 198 other players in 2000.

> NBA teams are really good at "hiring".

Some long-suffering Minnesota Timberwolves fans might say otherwise.


It’s only a 2-year contract. Curry’s performance for that period appears to be roughly in line with expectations of that draft position.


> Ricky Rubio

Ricky Rubio is famous for playing elite basketball since he was a scrawny 14 year old.

And Ricky Rubio has pretty much the same height as Steph Curry.


The NBA drafts more on potential than current playing skill, making it even more of a crapshoot.


Big tech companies love to hire on potential too. They often hire freshly minted college grads over experienced engineers.


I don't think that this is the conclusion I would come to. Height is not something that can be changed, therefore it cannot be used as an adjustable variable to make that market efficient. You can't train to be taller like you can train at coding competitions.

I would say that height is an advantage up to a certain point in basketball, but tall people are not especially rare. Within the market of basketball players, you can find tall people who also have other skills; sometimes you find short people (Steph Curry) who have exceptional skills.


Steph Curry is 6'3", in the 98th percentile for height. Wouldn't quite call him short. And there are only ~2,800 7-footers in the world, many of whom are in the NBA. So tall players - meaning over 7 feet - are extremely rare.


Yes I meant short relative to NBA players.


Also, Steph Curry is not really short, he just looks that way next to NBA centers and forwards; he's actually 96th percentile in height among American men.

A better example would be Muggsy Bogues who was a full 12" shorter than Steph Curry and he could dunk.


Muggsy, in the context of the NBA, has always stood out to me as an example of the failure of a supposedly efficient market due to a massive but unintuitive oversight. It would be one thing if he had just been a middling player, but even with the MASSIVE height disparity between him and the rest of the league, he proved to be a standout player, easily in the top 50% of players historically, and probably much higher. Clearly, there's a role for short men even at the upper echelons of the sport - not just as a curiosity, but as an effective value-add above an average replacement, in part because of his lack of stature. But you almost never see NBA players below 5'9". The players are tall. The coaches are tall. Surely being tall is generally necessary for success in the sport? But then, Muggsy.

Read between the lines. If all the players are tall, and all the coaches are tall, and the game has been played for more than a half century with that assumption... who knows how to train/coach a short player?


He also could jump almost a foot and a half higher off the ground than the average NBA player, who in turn can jump almost a foot higher than the average man of the same age.

What proportion of people out there can learn to jump so high, even with extensive training/practice?


Hard to know, considering that said extensive training/practice is not as well-known as other basketball-related training, and that the practice of fielding players who would benefit from it is discouraged.

That said, a quick search of "training to dunk 5'6"" on Youtube brings up a number of videos.


I was comparing his height to his peers in the NBA.


The more accurate conclusion is that, at extreme heights, you get a vastly smaller population to choose from, which largely offsets the slight height advantage when looking at people who make the cut.


- "The real answer is that markets are efficient"

Like it efficiently chooses all hockey players with birthdays in January to March? [0]

The real answer is people hire based on their biases and organisational restrictions more than they hire on objective metrics - and we have plenty of evidence for that.

0 - https://en.m.wikipedia.org/wiki/Relative_age_effect


Why doesn't the NBA just lower the height of baskets by a foot? They would get much better athletes, so surely the games would be more entertaining and the league would make more money, no?


Aren’t the advantages of height mostly relative to the opponent, ie when trying to block him or shoot over him? I think 7ft players are already too tall to be able to shoot from an optimum angle.


Basketball is already a fast, entertaining, high-point game. Why make it faster?


Simpler analogy: you can't be a good chef if you spend 12 hours a day just making bread.


Depending on how they define "winner at programming contests", this might narrow down the population to just a handful of "sport programmers". The same handful of guys win all the contests.

The statement might as well be "tourist has bad job performance". (https://en.wikipedia.org/wiki/Gennady_Korotkevich) And that isn't surprising given how much he has to train everyday to stay on top. He even turned down offers from Google/Facebook just to continue qualifying for the big annual competitions like Google Code Jam and Facebook Hacker Cup.

For a more in-depth account on how the top people train, you can check out this guy's advice on how to get two gold medals in IOI: https://codeforces.com/blog/entry/69100 and his training schedule: https://codeforces.com/blog/entry/69100?#comment-535272

Or this guy, who won IOI this year: https://www.youtube.com/watch?v=V_Cc4Yk2xe4&feature=youtu.be...


Agree that it's Berkson's Paradox.

Just because I see some stronger-worded rebuttals in this thread, I want to point out that just because this is true (it is Berkson's Paradox), that does not mean it cannot be a valuable observation. As the author pointed out, for example, it might mean that this attribute is overweighted in hiring, which is something worth considering.


That write up is excellent and I found this paragraph particularly interesting.

> An interesting paper [1] claims a negative correlation between sales performance and management performance for sales people promoted into managers. The conclusion is that “firms prioritize current job performance in promotion decisions at the expense of other observable characteristics that better predict managerial performance”. While this paper isn't about hiring, it's the exact same theory here: the x-axis would be something like “expected future management ability” and the y-axis “sales performance”.

[1] https://www.nber.org/papers/w24343.pdf


He’s not the CTO of Spotify, he worked at Spotify and he’s the CEO of a company called Better.


Thanks, fixed!


Exactly. I'm surprised that Peter Norvig, who literally wrote the textbook on AI, didn't think of this and instead came up with this other explanation.


Yup exactly.


IOI gold medalist here.

Most coding competitions tend to assess a specific set of skills: puzzle solving ability; algorithmic knowledge; and being able to code fast. All of these skills are useful in "real life" programming.

However, since the code you write will be thrown away post-competition, your focus is on churning out solutions that "just work" — proper engineering practices and maintainability aren't relevant. All your code needs to do is generate the correct outputs.

Does competing turn you into a strong coder? Absolutely. Does this equate to being a strong engineer? Nope. Software engineering isn't just about coding fast.

This is anecdotal, but from what I've seen (as a trainer and friend of several IOI medalists): some of them appreciate that coding != engineering and proceed to develop their engineering skills. Others don't and remain stuck at the "I'll come up with a fast solution" mindset.

Whether one or the other happens very much depends on the person, plus, I believe, whom they end up working with. After all, we've all heard about the "10x" programmer – and when your colleague or subordinate appears to code at 10x speed, you just might think twice about whether you're qualified to advise or guide them. That results in their keeping any bad habits they might have.


Coding is social, because most code needs to be maintained by more than one person (over its lifetime). And if the code is so brilliant that only a few people can understand it, it becomes a maintenance headache over the longer haul.

The same is arguably true for many professions and walks of life.

If you can make your work serviceable by more people, it becomes less expensive to do so. And in many (not all) cases, that’s a superior life-time value of your work.


> proceed to develop their engineering skills

Is there a way to focus practice on that?


Voluntary feature creep. Instead of writing simpler one-off projects, try to keep plugging at the same project for a longer time and expand the product in ways you couldn't have guessed at the beginning. You have to learn how to write maintainable code and plan ahead.


Write a lot of programs. Reflect on how they can be better. Repeat.

Program with people. Learn from them. Exchange ideas. Get code reviews and give them.

Look at program source code on github for inspiration.

And just keep writing programs.


It's very hard to assess automatically (and therefore cheaply) since "engineering skill" is largely qualitative rather than quantitative.


Having participated in competitive programming and compared it to development work, it feels to me like comparing chess tactics puzzles to classical chess. If you're a good classical player you're probably reasonably good at puzzles, but the opposite is not necessarily true.

Competitive coding, despite superficially involving typing code into an editor, has almost nothing to do with working on large pieces of software. It's a lot of rote memorisation, learning algorithms, matching them onto very particular problems, and so on; it's more of a sport. Just like playing too much bullet chess can be bad for your classical chess, I can honestly see how it gets in the way of collaborative work.


That's not an ideal comparison. Chess is 90% tactics, whereas a development job isn't 90% competitive programming.


It's actually more subtle. Yes, chess is very tactical but the way you approach tactics in a puzzle is very different from how you mentally approach tactics in a game of chess.

If you already know that there is a tactic in the position your entire frame of reference changes. Which is actually why puzzle composition is treated very differently from actually playing, and a lot of famous composers are not particularly strong players.

This is why I feel it compares well to coding competitions. It looks so similar, but the mindset is very different. And only looking at tactics, just like only looking at coding as a game problem, is, I think, why it may damage your performance at work.


Chess puzzles just never have realistic positions that you'd encounter in a real game so I can see why it wouldn't help you in games.


The terminology associated with chess challenges [to use a neutral term] is unfortunate.

"Chess problem" is a term of art that refers to an artificial composed position with a unique solution that is constructed to both be a challenge to the solver and have aesthetic value. They often have constraints on the solution such as that White must deliver checkmate in two moves (three ply). This is what I assume you're referring to.

A position from an actual game (or that easily could have been) that demonstrates a tactic (or combination of them) is generally known as a "chess puzzle", largely because the term "chess problem" was already squatted on.

Somewhere in between the two is the "study", which is a constructed position, less artificial than a chess problem but still very carefully made to have a unique solution that walks a tightrope and generally requires absolutely exact calculation rather than working by general tactical principles.


Chesstempo's puzzles all come from grandmaster games, I believe, and the Lichess puzzles come from Lichess games.


Lichess puzzles are taken from real games and tactics come up literally in every single chess game.


Ah cool, I'm playing some now, pretty nifty! The puzzles I knew back in the 90s were very contrived.


> Chess is 90% tactics

Where do people get the confidence to assert such nonsense? Chess is 90% tactics at the under-1800 Elo level or so. At the 2700+ level? No way.


"Chess is 90% tactics" is a pretty subjective statement


I didn't come up with it; the quote appears in chess circles a lot. I think a chess master said it.



That seems to ring true.

At lower levels like where I'm at, players are prone to mistakes and blunders, so having a good eye for tactics allows you to take advantage of those moments in the game as well as prevent yourself from getting into a bad situation.

But at elite levels, tactics have less importance (as he says in the video he estimates it drops to 50%) as every player at that level is extremely solid.


I was very good at competitions, but terrible at rote memorization, including memorizing algorithms and matching them to particular problems. I'd just create the algorithms on the fly. E.g. I was presented with a maze solving problem, never had read about maze solving before, and just created my own version of it.

It's easy to make generalizations that minimize or downplay some of these things. But it's no more knowledge than the original study on too little data was.


Berkson's, right? A perfect interview process would result in a population where none of the people who are hired as a result of it have any attributes that could be correlated with job performance. i.e. all the information has been 'used up' by the selection filter. If you have a correlation, then you can improve the selection filter, so it can't be perfect.

This can have interesting outcomes. For instance, when Triplebyte published their blog post about which environments get the most hires⁰, it revealed the areas they haven't yet entirely accounted for in their quest to increase matching performance.

0: https://triplebyte.com/blog/technical-interview-performance-...


I don't follow the reasoning. Even if you have a reliable prediction of performance, what prevents you from hiring some candidates who are exemplary and some who are just well-qualified? Or are we assuming that the best candidates would be given a higher-tier job for which they just meet the requirements?


It's a spherical cow conversation. The cows you hire can be placed in a smooth manifold of job difficulty and you have a predictable prediction of the job performance distribution.


Got it. And I guess job difficulty is a scalar quantity in this model.


Hard to call that a "perfect interview process", because from an individual candidate's point of view, some with unusual characteristics could be unfairly disadvantaged (and others unfairly advantaged), while reaching your overall "neutral" distribution at the end. Short of being an omniscient interview process, I'm not sure this can absolutely be avoided, so in practice you have to be very careful with the kind of correlation presented here. Even if that ends up not being Berkson, but a property of all programmers.


In the linked video, Peter Norvig attaches a big caveat which is that this analysis was done on people who were hired and therefore met Google's various other hiring criteria. So the context isn't random people off the street, but rather that among people who are already (presumably) competent at what they do, winning coding competitions correlates inversely with job performance for some reason.


So I would not just say "the context isn't random people off the street", but that "the context isn't random employed/employable programmers". It's people working at Google. Deducing anything from that with high certainty is hard. Not only could it be inverted in other companies, but even if not, it is hard to apply statistics to individual candidates.


I wouldn't be surprised if interview performance also correlated negatively with job performance, but I guess we'll never find out.


I think you'll find a good sample of people who did poorly on their interviews but were still hired due to stellar references, corporate politics or nepotism.


That's possible; extremely high performance at interview may just mean that person trained really well for the interview process, not for the job.


But is the opposite true, and can you prove it by binary tree optimization?


How much time do I have?


Aha, now I can feel good about my late but completely unit tested Advent of Code solutions.


project structure + unit tests > coding fast + clever solutions


Just like writing code on a whiteboard with people watching, it's almost like activities that have nothing to do with what you'd do on the actual job actually aren't a great predictor of job performance. haha.


Anecdotal, but I have noticed a similar correlation between achievements in Capture The Flag (CTF) competitions and performance in the cybersecurity industry. As some people have pointed out, this effect appears when observing just people within the field, not the general population.

Disclaimer: I don't have any data to back this up.


I've seen the opposite. Many of the very best security researchers are also top performers in CTFs (to give one example, Orange Tsai). It's hard for me to see how your correlation works unless "cybersecurity industry" is defined extremely narrowly.


Correlations don't have to "work" as they don't imply causality. Orange Tsai is indeed quite talented at both competitions and research (but it would prove as little as my own anecdotes).

If I'm biased, it might be because I defined the "cybersecurity industry" too broadly, not too narrowly: One can acquire certain skills from competing in CTFs/competitions, e.g. hard skills related to reverse engineering and vulnerability research... but I believe in most cases further skills are additionally needed to succeed in the industry, e.g. software engineering, and softer skills such as communication, planning and negotiation (useful for other jobs as well).

Overly optimizing skills to win CTFs while neglecting other matters can be harmful, like badly assigned character points in an RPG. :-)


Given that companies regularly struggle to evaluate performance well or even accurately, an equally valid conclusion could be “Coding and playing politics to be well-ranked by peers are negatively correlated”. Now the question becomes whether coding or playing politics is what you value more organizationally.


I can already see folks commenting out that this is the reason we should stop whiteboard interviews. But let's take a step back.

Whiteboarding weeds out candidates that cannot code, period. I wouldn't believe the legend of the "senior engineer" who couldn't FizzBuzz until I met him. I've personally conducted interviews where a simple word count function (take a string, count the words) couldn't be implemented in 30 minutes. And the resume listed proficiency in several programming languages.
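
For reference, a minimal version of that word-count exercise (treating a "word" as a whitespace-separated token, which is one reasonable reading of the prompt):

    def word_count(text: str) -> int:
        # str.split() with no argument collapses runs of whitespace
        # and ignores leading/trailing spaces.
        return len(text.split())

    assert word_count("take a string, count the words") == 6
    assert word_count("") == 0
    assert word_count("  spaced   out  ") == 2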

Now this study tells us that competitive programmers aren't that great among the candidates that were hired, not among the pool of candidates. That's very different.


I have very little frame of reference for other people's technical skills when they enter the profession, since I'm entirely self-taught and still move dirt for a living. Thank you for sharing that story; it drove me to try my hand, and it took less than 15 minutes.


Cool headline but not really data or a serious article. Obviously it’s a mix of factors, but some of the best I know are also very good at coding interviews, contests, etc.


In what way is this not data? Are you saying that because they didn't show you the data that it doesn't count?

Alternatively, is it possible that this is an instance where your local experience doesn't generalize?


Imagine replying to a post based on a huge dataset, saying it's "not really data", then countering with an opinion based on a personal anecdote.


Now imagine that the originator of the aforementioned "not really data" came to this very thread to admit that the proposed correlation disappeared once the dataset became more "huge" (c.f. Norvig's actual comment)


My takeaway from this was that you can't have too many things you're judging a candidate on. If you over-emphasise one trait, you're going to screw yourself over in finding another. I was also happy to see that ignoring things that many companies over-value is a good strategy if you think you know what's over-valued.

My own 3 hiring criteria are still not talked about much and were very effective last I used them. Now I understand why a little better.


> Peter added that programming contest winners are used to cranking solutions out fast and that you performed better at the job if you were more reflective and went slowly and made sure things were right.

Alternative explanation: good job performance, especially in big companies where such studies can be conducted, requires some consideration for corporate politics, which correlates negatively with interest in programming?


ITT: Lots of folks misunderstanding correlation. =)


I wonder, is it true for data science too?

Does being good at Kaggle competitions correlate negatively with actual job performance?


Kaggle comps don't, to my knowledge, involve tasks like:

- Convincing someone to give you data.

- Convincing them to put great effort into changing their data collection and labelling practices.

- Explaining why a particular technique was used and why it is correct.

- Explaining why they can't expect magic from 'big data'.

- Making models that are robust and easily maintained, vs fragile spaghetti.

But I don't think being good at kaggle implies being bad at data science soft skills. Technical skills are probably weakly correlated -- that's my prior, it would be good to see a study.

I did find a paper examining the performance of TopCoder participants: https://doi.org/10.1007/s10664-019-09755-0


Coding competitions don't require any of the soft skills that a real job does


Or many of the hard skills.


The first developer I ever had to let go was good at coding competitions. He was a gambling addict and I feel like the coding competition thing had similar motivations... Get rich quick somehow.


Focus on coding competitions has negatively impacted my software engineering journey.

The only benefits are that you get better at interviewing and that they are fun to do, but the cons far outweigh the pros.


Could you define what you mean by focus? Do you mean you were training, practicing, and doing many events?


In my hiring for software engineers and architects I always consider coding/design skill to be merely a necessary condition - something you just briefly test for, mostly to detect b*llshit on the resume.

What I found is that social skills are far more important. Can the person work as a member of a team? How do they respond to feedback? Or when something is hard? When they get stuck? How do they communicate a design? Rally a team around them? Deal with disputes? React to changing situations? Can they take the initiative or do they need to be told what to do? Etc, etc.

Together with actual coding/design skills - and with proper management - these are the necessary conditions. All my humble opinion, of course.


How do you test for those social skills?


IMHO, part of the technical questions is to evaluate reactions. Perhaps I disagree with a solution. Or I do not give all the information needed to solve a problem so the person has to ask. Or I give hints on the way and see how receptive the person is. And so on.

After that it's just part of the conversation. Probing the resume. And asking some of these questions flat out. I also discuss real practical problems/challenges we have at work, most of which require a team to work together.

We also (used to pre-covid) take the person for lunch and decidedly talk not about work at that time.

Of course it's different between an engineer fresh from college and a seasoned software engineer or architect.

In the end, I think interviewing is itself a skill one needs to train. Just having a person solve some brain-teasers doesn't cut it. I have seen _way_ more teams, co-founders, etc., fail due to social issues rather than technical skills.

And I am by no means presuming that I'm good at it, just that these are things I look for.

An important part I forgot to mention in the first post is that the person is interviewing the company as much as we interview the person. So we leave a significant amount of time for asking questions about the company... and try to make a good impression.

(Sorry for the essay.)


There are actually a fair number of scientific studies on good interview practices. The book "Becoming the Evidence-Based Manager" gives a nice summary of many of them.

I'll spoil a bit of the book though. Free form "coffee interviews" or lunch interviews are the worst possible types. They introduce a ton of bias into the process that's not related to job performance.


Thanks for the reference, added that to my Tsundoku.


Don't be sorry; I'm glad you explained that. I've been involved in some hiring and my workplace isn't equipped at all to deal with those questions properly...


I would have more trust in this if a) it reflected on the style of interview they gave and/or b) they released data.


Being excellent at algorithms and mathematics is the basis for solving unsolved computer science problems. Though most companies just need to implement projects, top ones will try to push the boundaries. All projects and companies will eventually die someday; algorithms and mathematics will stay.


Knowledge does not equal wisdom. It's as simple as that.


I hate this kind of comment where people take synonyms and say they are different things. Knowledge and wisdom are the same thing despite whatever bullshit definitions people apply to try to make them seem different.

Perhaps what you mean is that knowledge of artificially crafted problems is not the same as knowledge of the practical tasks that you would perform in a real job on a real-world application.


They're not even close to being the same thing. The point here is that wisdom is more akin to judgment than mere possession of facts. For example, sufficient wisdom may give one the humility to reflect on the completeness of their understanding before submitting an inflammatory reply.


Knowledge is knowing what happened. Wisdom is realizing what is likely going to happen.


The dictionary definition does not make any reference to this. At most you could say the difference is knowledge can be purely theoretical while wisdom is knowledge + practical experience.

But nothing about a coding challenge is purely theoretical. It's mostly experience with the specific set of problems that come up in these challenges, which is a different set of problems from those that come up in a commercial situation.


Knowledge is knowing what the dictionary says. Wisdom is knowing what that definition implies :)

Sorry, just ribbing you a bit.

I believe the words do have subtle but distinct differences in meaning.

Agreed on coding challenges being a different set of problems than most commercial applications.


Well yeah. Coding competitions are like an F1 race. High speed, high skill, high stress.

Building products and services is like driving an 18-wheeler cross country. It needs patience, dedication, long, drawn-out effort, some degree of talent, teamwork, guidance, and whatnot.


I kinda like this analogy.

Coding competitions are also like F1 races in that the problem space usually is very well-defined and very narrow. You'll only run in this track, you know pretty much exactly who your opponents are, you know exactly how much budget you have. The only things that are unpredictable are the weather on the day of the race, the accidents that may happen, a team member getting sick.

Building actual products and services is like driving an 18-wheeler in that the road is much more open and you don't know what other vehicles and/or reckless drivers you'll come across, the weather variation over a very long distance, traffic, road works and detours. The driver also needs to stay awake for much, much longer lengths of time.


> and made sure things were right.

Every coding competition I've participated in required correctness as a first criterion for basic acceptance, and then speed as a secondary scoring criterion.


Did you lose significant points for incorrect solutions? Otherwise that still discourages verifying over guessing.


For Advent of Code, you lose a minute of time for every wrong answer. Also, in most how-to-leaderboard guides it's heavily noted that making mistakes (i.e. bugs) is going to cost you the most time (anecdotally, I'd agree with this). Those who make the leaderboard have to write bug-free code.


And just for reference, the difference between two performers is usually between 1 and 15 seconds, so one minute is a lot. On current days, if you are fast, one minute may still get you on the leaderboard, as the delta between number 1 and 100 is 10 minutes; for day 1 it was 34 seconds.


[flagged]


Maybe it was a proxy for IQ back when leetcode-type interview questions (and websites to help you prepare for them) were less popular. Now that they are so common, it's just another specific skill that candidates can grind, so it measures preparation as much as or more than IQ.


The first two statements you make are incorrect. Well, I'll give you that you might find a big company who thinks of it this way (but they are confused).

There is a lot of data at FAANGS about hiring, but it will be structurally very hard to infer anything about broader activity from it, even if you had all the data.


Doubt it. Big companies use coding problems as a proxy for knowledge of coding, which turns out not to be a strong predictor of success at work and in life. There is a lot of data internally at FAANGs showing that being good at coding competitions correlates negatively with job performance.


:)


Being good at coding problems, even leetcode questions, is a different thing than being good at coding competitions. This isn't a "dunk".


You can't cheat in coding competitions. In a real life job, however, you can.


Can you define cheating? Does looking on Stack Overflow for solutions to real life job problems count as cheating?

(That was a partly rhetorical question. My point is that what is considered cheating in coding competitions is pretty much normal in real-life jobs. Somebody who knows to look for solutions on Stack Overflow, I call them "resourceful", not a cheater. Being resourceful is really important in real-life jobs.)


The winners of the Kaggle/Facebook million dollar contest were disqualified for cheating.



