Benchmarks
Measuring performance to find out how fast Results are
Throughout these guides, we have mentioned that throwing Java exceptions is slow. But... how slow? According to our benchmarks, throwing an exception is several orders of magnitude slower than returning a failed result.
This supports the advice that exceptional logic should not be used merely to control normal program flow.
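To illustrate the gap, here is a minimal, unscientific micro-benchmark sketch. It is not part of the library's benchmark suite, and all names in it are made up for the example; a rigorous comparison would use a harness such as JMH to control for JIT warm-up and dead-code elimination.

```java
// Illustrative micro-benchmark: signaling failure by throwing an exception
// versus signaling failure through the return value. Names are hypothetical.
public class ThrowVsReturnBench {

    static final int ITERATIONS = 100_000;

    // Signals failure by throwing an exception.
    static String viaException(int number) throws Exception {
        if (number < 0) {
            throw new Exception("negative: " + number);
        }
        return "ok";
    }

    // Signals failure by returning a sentinel value instead.
    static String viaReturn(int number) {
        return number < 0 ? null : "ok";
    }

    static long timeExceptionsNs() {
        int failures = 0;
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            try {
                viaException(-1);
            } catch (Exception e) {
                failures++;
            }
        }
        long elapsed = System.nanoTime() - start;
        return failures == ITERATIONS ? elapsed : -1;
    }

    static long timeReturnsNs() {
        int failures = 0;
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            if (viaReturn(-1) == null) {
                failures++;
            }
        }
        long elapsed = System.nanoTime() - start;
        return failures == ITERATIONS ? elapsed : -1;
    }

    public static void main(String[] args) {
        System.out.println("exceptions: " + timeExceptionsNs() + " ns");
        System.out.println("returns:    " + timeReturnsNs() + " ns");
    }
}
```

The dominant cost on the exception path is capturing the stack trace every time the exception is constructed, which the plain return path never pays.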
Benchmarking the Result Library
This library comes with a set of benchmarks that compare performance when using results versus when using exceptions.
Simple Scenarios
The first scenarios compare the most basic usage: a method that either returns a String or fails, depending on a given int parameter:
Using Exceptions
public String usingExceptions(int number) throws SimpleException {
    if (number < 0) {
        throw new SimpleException(number);
    }
    return "ok";
}
Using Results
public Result<String, SimpleFailure> usingResults(int number) {
    if (number < 0) {
        return Results.failure(new SimpleFailure(number));
    }
    return Results.success("ok");
}
Complex Scenarios
The next scenarios do something a little more elaborate: a method invokes the previous one to retrieve a String; if successful, it converts the value to upper case; otherwise, it transforms the "simple" error into a "complex" error.
Using Exceptions
public String usingExceptions(int number) throws ComplexException {
    try {
        return simple.usingExceptions(number).toUpperCase();
    } catch (SimpleException e) {
        throw new ComplexException(e);
    }
}
Using Results
public Result<String, ComplexFailure> usingResults(int number) {
    return simple.usingResults(number)
            .map(String::toUpperCase, ComplexFailure::new);
}
Conclusion
These benchmarks offer insight into the Result library's performance. While the metrics confirm that most codebases would be better served by returning results than by throwing exceptions, the library's main goal is to promote best practices and proper error handling.