Morwenn

Generally speaking, I have to admit that I don't understand everything the code is doing and that I could use a little more time in order to understand it. However, I hope I can still give you some useful pieces of advice:

  • You could use a little more C++ in your code, which could be considered a bit too C-ish. What comes to my mind is BITS_PER_WORD which can actually be obtained with std::numeric_limits:

      enum { BITS_PER_WORD = std::numeric_limits<word_t>::digits };
    

    The same goes for several other constants such as UINT32_MAX which could be std::numeric_limits<uint32_t>::max(). I have to admit that it makes the code longer too, but it may be easier to change the integer type with a basic find-and-replace if you ever need to.

  • You could also use the C++ algorithms instead of the C library. For example, replace std::memset by std::fill_n, which is likely to be optimized back into an std::memset anyway while being clearer:

      std::fill_n(target_segment, std::size_t((max_bit + CHAR_BIT) / CHAR_BIT), 0);
    
  • It is likely that the compiler has already optimized this, but since you compute both index / BITS_PER_WORD and index % BITS_PER_WORD, you may as well make sure that these two values are computed together by using std::div. The same applies to bits_to_sieve % segment_size and bits_to_sieve / segment_size elsewhere in the program.

  • If you have access to a C++11 compiler, don't hesitate to declare some variables constexpr, like SIEVE_BITS which can totally be computed at compile time.
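The last two points can be sketched together. Note that word_t, SIEVE_SIZE and the index value below are placeholder assumptions standing in for the original program's definitions:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>   // std::div
#include <limits>
#include <utility>

using word_t = std::uint32_t;  // assumption: stand-in for the program's word type

// constexpr constants are evaluated at compile time, as SIEVE_BITS could be.
constexpr int BITS_PER_WORD = std::numeric_limits<word_t>::digits;
constexpr std::size_t SIEVE_SIZE = 1 << 16;                  // hypothetical size
constexpr std::size_t SIEVE_BITS = SIEVE_SIZE * BITS_PER_WORD;

// One std::div call produces the quotient and remainder together,
// replacing the separate index / BITS_PER_WORD and index % BITS_PER_WORD.
inline std::pair<int, int> locate_bit(int index)
{
    std::div_t qr = std::div(index, BITS_PER_WORD);
    return { qr.quot, qr.rem };  // (word index, bit position within the word)
}
```

Under these assumptions, locate_bit(100) yields (3, 4) with 32-bit words: word 3, bit 4.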
