There is no foolproof way to guarantee that the "official client" is running; any such mechanism relies on the validating code communicating some kind of "secret" back to the server, and that secret can be reverse engineered, given enough time. This is essentially what happens when anti-cheat software tells the server that the client is OK.

Edit: to elaborate a bit on the above, consider the code that validates the client side. It has two very difficult jobs: checking that the original code is running (and that nothing else is present which can interfere with it dynamically, at runtime), and communicating that result back to the server in a way that cannot be faked. The first part is extremely difficult; the second is downright impossible.
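
To make that concrete, here is a minimal sketch of what such a client-side check typically boils down to. The names and the "protocol" are made up for illustration; this is not any real anti-cheat API.

```python
# Hypothetical sketch of a naive client attestation check.
import hashlib
import hmac

# The key has to ship inside the client binary, so it is recoverable
# by anyone willing to reverse engineer it.
EMBEDDED_KEY = b"obfuscated-key-shipped-inside-the-binary"

def client_integrity_proof(nonce: bytes) -> bytes:
    """Hash our own executable and bind it to a server-supplied nonce."""
    with open(__file__, "rb") as f:
        binary_hash = hashlib.sha256(f.read()).digest()
    # The proof is HMAC(key, nonce || hash). A cracker who patches the game
    # in memory can still hash a pristine copy of the file and produce
    # exactly this value, so the server learns nothing reliable.
    return hmac.new(EMBEDDED_KEY, nonce + binary_hash, hashlib.sha256).digest()

def server_verify(nonce: bytes, proof: bytes, known_good_hash: bytes) -> bool:
    expected = hmac.new(EMBEDDED_KEY, nonce + known_good_hash, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)
```

Everything the server verifies is derived from material that ships with the client, which is why the channel itself proves nothing.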

If you can update both the client and the server on a regular basis, you can rotate the secret regularly, in the hope of making it difficult for the crackers to keep up. In all likelihood, however, unless you also change how the secret is encoded and implemented, it will be cracked again very quickly. So it is basically an arms race between you and whoever wants to crack it: whoever has more time and money to throw at the problem wins. A rough sketch of the rotation idea follows below.
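
The layout and names here are hypothetical; the point is only that the server accepts nothing but the secret baked into the current build.

```python
# Sketch of rotating the shared secret with every client build.
from typing import Optional

CURRENT_BUILD = "1.0.42"

# Regenerated and redeployed with each client/server update.
SECRETS_BY_BUILD = {
    "1.0.41": b"key-baked-into-build-41",
    "1.0.42": b"key-baked-into-build-42",
}

def secret_for(build: str) -> Optional[bytes]:
    # Reject clients on retired builds outright; only the key that shipped
    # with the current build is ever accepted.
    return SECRETS_BY_BUILD.get(build) if build == CURRENT_BUILD else None
```

Rotation raises the cost per patch, but if the way the key is hidden never changes, re-extracting it is routine work for the cracker.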

Having accepted that, is there anything else we can do? In a perfect world, with infinite computing power and bandwidth, you could continuously transfer state between the client and the server and have the server run a perfect simulation of what is happening on the client. That model could then be used to validate every action the client claims to make. It would not detect whether a human or a bot is playing, but it would catch a client claiming a shot through a wall, or some other impossible action.
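
As a toy example of that kind of server-side validation (the geometry and names are invented for illustration, not tied to any engine), the server can keep its own copy of the map and reject hit claims that would have to pass through a wall:

```python
# Minimal server-side sanity check on a 2D grid map.
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

def line_of_sight(walls: set[tuple[int, int]], a: Vec2, b: Vec2, steps: int = 100) -> bool:
    """Sample points along the segment a->b and check none fall inside a wall cell."""
    for i in range(steps + 1):
        t = i / steps
        px = a.x + (b.x - a.x) * t
        py = a.y + (b.y - a.y) * t
        if (int(px), int(py)) in walls:
            return False
    return True

def validate_hit_claim(walls: set[tuple[int, int]], shooter: Vec2, target: Vec2) -> bool:
    # A hit claimed through a wall is physically impossible, so the server
    # can discard it no matter what the client says.
    return line_of_sight(walls, shooter, target)

# Example: a wall cell at (5, 5) blocks a shot from (0, 0) to (10, 10).
walls = {(5, 5)}
assert not validate_hit_claim(walls, Vec2(0, 0), Vec2(10, 10))
```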

Having enough data on the server is also the first step towards detecting irregular behaviour, such as aiming that is too fast for a human. The perfect-simulation scenario is generally not feasible, but a scaled-down, approximate model of it can be used in many situations.
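
A rough sketch of such a heuristic, with an entirely made-up threshold that would need tuning per game, might look like this:

```python
# Server-side heuristic over recorded aim data.
HUMAN_MAX_DEG_PER_SEC = 2000.0  # illustrative threshold, needs tuning per game

def suspicious_aim(yaw_samples: list[float], tick_seconds: float) -> bool:
    """yaw_samples: view yaw in degrees, one sample per server tick."""
    for prev, cur in zip(yaw_samples, yaw_samples[1:]):
        delta = abs(cur - prev) % 360.0
        delta = min(delta, 360.0 - delta)  # shortest rotation between samples
        if delta / tick_seconds > HUMAN_MAX_DEG_PER_SEC:
            return True
    return False

# Example: a 180-degree flick within a single 16 ms tick looks superhuman.
print(suspicious_aim([0.0, 180.0], tick_seconds=0.016))  # True
```

Players flagged this way are better routed to logging and manual review than banned automatically, since any fixed threshold will produce false positives.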
