
Add a &benchmark option to the time command #1586

Closed
krader1961 opened this issue Jul 24, 2022 · 2 comments · Fixed by #1591

Comments

@krader1961 (Contributor)

While working on issue #1570 I wrote a simple script that used time to test the speed of various sorting options. It would be extremely helpful if the time command had a &benchmark option that ran the function n times (perhaps defaulting to three or five, but allowing the user to specify the number of runs) and output the best time.
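The behavior being requested is essentially "best of n". As a rough sketch of that logic (in Python rather than Elvish, purely for illustration; the function name `benchmark` and the default of five runs are assumptions, not the actual implementation):

```python
import time

def benchmark(fn, runs=5):
    """Run fn `runs` times and return the best (minimum) wall-clock duration.

    Reporting the minimum, rather than the mean, treats every run slower
    than the fastest as having been perturbed by external noise.
    """
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        elapsed = time.perf_counter() - start
        best = min(best, elapsed)
    return best
```

For example, `benchmark(lambda: sorted(data))` would call the sort five times and return the fastest observed time.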

@krader1961 (Contributor, Author)

It is probably better to dynamically size the default number of benchmark runs based on how long each run takes, rather than hardcode the default to a value like five. Short run times are more likely to be significantly affected by transient factors outside the control of the benchmark, while long run times tend to amortize those transient external factors and thus deviate less from the optimal run time. TBD is how formal this scaling should be versus a "seat of the pants" approach that yields good-enough results. One solution is to calculate the variance of the run times and perform more runs if the variance is above some threshold, while ensuring a minimum number of runs (e.g., totaling ten seconds) when each individual run is extremely short.
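The adaptive scheme described above can be sketched as follows (again in Python for illustration only; the thresholds, the coefficient-of-variation stopping rule, and all names here are assumptions, not what was eventually implemented):

```python
import statistics
import time

def adaptive_benchmark(fn, min_runs=5, min_total=10.0,
                       cv_threshold=0.05, max_runs=1000):
    """Run fn repeatedly until the timings look stable, then return
    (best_time, run_count).

    Stops once at least `min_runs` runs and `min_total` seconds have
    accumulated AND the coefficient of variation (stdev / mean) of the
    run times drops below `cv_threshold`. The `min_total` floor ensures
    that extremely short runs are repeated enough to amortize noise;
    `max_runs` caps the loop for inherently noisy workloads.
    """
    times = []
    total = 0.0
    while len(times) < max_runs:
        start = time.perf_counter()
        fn()
        elapsed = time.perf_counter() - start
        times.append(elapsed)
        total += elapsed
        if len(times) >= min_runs and total >= min_total:
            mean = statistics.mean(times)
            if mean > 0 and statistics.stdev(times) / mean <= cv_threshold:
                break
    return min(times), len(times)
```

The trade-off is that a stable, long-running benchmark exits after the minimum number of runs, while a short or noisy one keeps accumulating samples up to the cap.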

@krader1961 (Contributor, Author)

This issue is interesting. It's only semi-relevant since it concerns a C++ benchmark framework, but the questions it explores apply to benchmarks in any language. I've read many similar threads about choosing the number of benchmark runs; as far as I can tell there is no consensus on the optimal count. So I'm inclined to implement a &benchmark option that works for my situation and leave optimizations for future changes.

krader1961 added a commit to krader1961/elvish that referenced this issue Jul 30, 2022
krader1961 added a commit to krader1961/elvish that referenced this issue Sep 18, 2022
@xiaq xiaq closed this as completed in eb1770f Nov 20, 2022