
Conversation

@aviatesk aviatesk commented Aug 5, 2021

The optimizer code is full of loop constructs, and I found it is much
better not to run them in the interpreter.

With this PR, we create the caches for the optimizer code first, which
speeds up the subsequent bootstrapping that creates the caches for the
overall inference code.

On my machine, bootstrapping previously took about 80 seconds; with
this PR it is reduced to about 60 seconds.
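The mechanism can be illustrated with Julia's `precompile`; the function below is a made-up stand-in, not the actual bootstrap code, but it shows why compiling a loop-heavy body up front beats running it in the interpreter:

```julia
# Illustrative only: force compilation of a loop-heavy function before its
# first use, so later calls hit the compiled cache instead of the interpreter.
loopy(xs) = (s = 0; for x in xs; s += x; end; s)

precompile(loopy, (Vector{Int},))   # create the cache ahead of time
@assert loopy([1, 2, 3]) == 6       # the first call already runs compiled
```

The PR applies the same idea at a larger scale: cache the optimizer's own code first, so the rest of the inference bootstrap runs through the compiled optimizer.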
@oscardssmith
Member

I don't fully understand this, but I'll gladly take 20 second faster CI!

@aviatesk aviatesk merged commit d18d8a4 into master Aug 5, 2021
@aviatesk aviatesk deleted the avi/bootstrap branch August 5, 2021 17:12
# especially try to make sure any recursive and leaf functions have concrete signatures,
# since we won't be able to specialize & infer them at runtime

let fs = Any[typeinf_ext, typeinf, typeinf_edge, pure_eval_call, run_passes],
Contributor

I noticed that besides the reordering, pure_eval_call has been removed. How come it isn't needed anymore?

Member Author

pure_eval_call can easily be inferred from entry points like typeinf, because we make sure Core.Compiler is very type-stable.
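This point can be sketched with a toy example (the function names here are hypothetical, not the compiler's): when a type-stable caller is compiled with concrete argument types, its concretely-typed callees get compiled along the way, so they don't need their own entry in the precompile list.

```julia
# Hypothetical illustration: compiling a type-stable caller transitively
# compiles its callees whose signatures are concrete at the call site.
callee(x::Int) = x + 1
caller(x::Int) = callee(x) * 2   # callee is reached with a concrete Int

precompile(caller, (Int,))        # caches caller, and callee along with it
@assert caller(3) == 8
```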


Labels: none. 5 participants.