開源日報 recommends one quality GitHub open source project and one selected English tech or programming article every day. Keep reading 開源日報 to maintain the good habit of learning daily.
  • Today's recommended open source project: all-contributors ("the family portrait")
  • Today's recommended English article: "On Programming Languages, Culture, and Benchmarks"

Today's recommended open source project: all-contributors ("the family portrait")
Why it's recommended: This is a specification: every contributor to an open source project, not only those who contributed code, deserves recognition. The project documents the details and concrete requirements of the specification, and it may serve as a reminder to future project owners that an open source project is more than its code; what surrounds the code matters just as much, and the contributors who help with it deserve the same recognition.
Today's recommended English article: "On Programming Languages, Culture, and Benchmarks", by Jon Bodner
Original link: https://medium.com/capital-one-tech/on-programming-languages-culture-and-benchmarks-87869fa88ba6
Why it's recommended: a comparison of Go and Java on two fronts: culture and benchmarking.

On Programming Languages, Culture, and Benchmarks

4 Comparisons Between Go and Java
One thing that non-programmers often find surprising about programming is that different languages have different communities with different cultures. These cultures dictate things both large (how people decide what new features are added to a language) and small (tabs vs. spaces). They also pop up in interesting ways. Recently, I was pulled into a discussion about the cost of reflection in Java vs. the cost in Go. I didn't know the answer, so I wrote some benchmarks to see what the difference was. The most interesting part wasn't the results; it was the design philosophy around benchmarking and what it revealed about the cultures of the respective languages. Here's what I observed.
1. Benchmarking is a core part of Go, an optional part for Java
Go's benchmarking support is integrated into the testing package that's built into the standard library. Documentation on writing and running benchmarks is included as part of the standard documentation. The benchmarks are included as part of the project that is being benchmarked.
In Go, benchmarks are run with the command go test -bench=. (the . is a regular expression matching the names of the benchmark functions you want to run; a lone dot means run everything). There are additional flags that control other aspects of benchmarking, such as whether to benchmark memory in addition to performance, or how long to run the benchmarks. And, as we'll discuss in a bit, this integration with the standard distribution has other implications as well.
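For example (an illustrative invocation; -benchmem and -benchtime are standard go test flags), this runs every benchmark for at least five seconds and reports memory allocations alongside timings:
go test -bench=. -benchmem -benchtime=5s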
Java's approach is different. The standard Java benchmarking library is JMH. Even though it is written and maintained by Oracle, it isn't bundled with the standard library. The recommended way to use JMH is to create a separate benchmarking project. Developers then use Maven (the popular third-party project management tool) to run this not-so-simple command:
mvn archetype:generate -DinteractiveMode=false -DarchetypeGroupId=org.openjdk.jmh -DarchetypeArtifactId=jmh-java-benchmark-archetype -DgroupId=org.sample -DartifactId=test -Dversion=1.0
When it's time to run your JMH benchmarks, the standard way is to use the commands mvn clean install; java -jar target/benchmarks.jar. This builds and runs the benchmarks in your benchmark project. There are options, lots of them. The JMH code runner has over 30 command line flags. If that's not enough control, you can write your own code runner and configure the options directly:
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public static void main(String[] args) throws RunnerException {
    // Run benchmarks whose name matches MyBenchmark in one forked JVM,
    // reporting average time per operation in nanoseconds.
    Options opt = new OptionsBuilder()
            .include(MyBenchmark.class.getSimpleName())
            .forks(1)
            .timeUnit(TimeUnit.NANOSECONDS)
            .mode(Mode.AverageTime)
            .build();
    new Runner(opt).run();
}
One effect of including a standard benchmark runner in Go is that I have seen many more examples of benchmarking in Go than in Java.
2. Declaring benchmarks is surprisingly similar between Java and Go
To create a benchmark in Go, you add a new function to a test file in your project. Test files are simply files whose names end in _test.go. Each benchmark function's name starts with the word Benchmark, and it takes a single parameter of type testing.B. This follows the pattern for testing in Go, which uses functions whose names start with the word Test and take a single parameter of type testing.T. What is interesting is that configuration by function name is a bit more "magical" than the usual Go style. As a general design rule, Go favors explicit invocation over implicit invocation. But in the case of testing and benchmarking, Go relies on a test runner that looks for functions with particular name structures to know that they should be invoked. This style stands out in Go because it is so uncommon.
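A minimal sketch of the naming convention (the names and the add function are illustrative, not from the article):
package demo

import "testing"

// TestAdd is picked up by the test runner because its name starts
// with Test and it takes a *testing.T.
func TestAdd(t *testing.T) {
        if add(2, 3) != 5 {
                t.Fatal("add(2, 3) != 5")
        }
}

// BenchmarkAdd is picked up because its name starts with Benchmark
// and it takes a *testing.B.
func BenchmarkAdd(b *testing.B) {
        for i := 0; i < b.N; i++ {
                add(2, 3)
        }
}

func add(x, y int) int { return x + y }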
Creating benchmarks with JMH is similar to the process in Go. You create a new class to hold the benchmarks, and then annotate the benchmark methods with @Benchmark. Since the benchmarks are in a separate project from the code being measured, you use Maven to reference your code as a library. This is a common pattern in Java: annotations mark methods that are expected to behave in a special manner, and a part of the program scans the classpath for methods marked with the annotation so they can be executed.
3. Writing a benchmark in Go asks more from developers than Java does, but gives them more control over timing
Writing a benchmark in Go is a bit more complicated than writing one in Java. Benchmarking requires multiple runs to get accurate measurements, and in Go you need to explicitly write the loop for the benchmark run, using a repetition count (b.N) supplied by the benchmark runtime. I also had to write my own blackhole function to eat the output so that it wouldn't be optimized away by the compiler. If you want to set up some data before the test runs, or if you want to exclude some logic from being timed, you can explicitly stop, start, and reset the timer:
func BenchmarkNormalSetPointer(b *testing.B) {
        d := &Data{A: 10, B: "Hello"}
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
                normalSetPointer(d)
        }
}

func normalSetPointer(d *Data) {
        d.A = 20
        blackhole(d)
}
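The snippet above assumes a Data struct and a blackhole helper that are defined elsewhere in the file and never shown in the article. A minimal sketch of both, assuming the common trick of assigning to a package-level sink variable so the compiler cannot prove the value is unused:
// Data is reconstructed from the fields the benchmarks touch; the
// exact definition is an assumption.
type Data struct {
        A int
        B string
}

// sink is a package-level variable. Assigning to it keeps the
// compiler from eliminating the benchmarked work as dead code.
var sink *Data

func blackhole(d *Data) {
        sink = d
}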
Java's benchmarking only requires the actual business logic. The looping is done for you, and JMH provides a Blackhole utility class to swallow output and prevent it from being optimized away:
@Benchmark
public void normalSetPointer(Data data, Blackhole blackhole) {
    data.a = 20;
    blackhole.consume(data);
}
In order to set up the data for the benchmark and exclude the setup time from the measurements, JMH requires you to create a static inner class and annotate it as "State":
@State(Scope.Thread)
public static class Data {
    public int a = 1;
    public String b = "hello";

    public String getB() {
        return b;
    }
}
When using JMH, I couldn't find a way to exclude part of the time inside a benchmark or to reset the timings.
4. Go's benchmarks have limited configuration and good integration. Java's are the opposite.
Go's benchmarking isn't very configurable. You can specify that the benchmarks run a specific number of times, for a minimum duration, or with a specific number of CPU cores. When you run benchmarks, the output is written to the console in the units that make sense to the benchmarking tool:
BenchmarkDoNothing-8             2000000000     0.29 ns/op
BenchmarkReflectInstantiate-8      20000000      110 ns/op
BenchmarkNormalInstantiate-8     2000000000     0.29 ns/op
BenchmarkReflectGet-8              10000000      156 ns/op
You can also get the results in JSON, via go test's -json flag:
{"Time":"2018–06–29T12:11:39.731321926–04:00","Action":"output","Package":"github.com/jonbodner/reflect-cost","Output":"BenchmarkDoNothing-8 \t"}
{"Time":"2018–06–29T12:11:40.355509283–04:00","Action":"output","Package":"github.com/jonbodner/reflect-cost","Output":"2000000000\t 0.30 ns/op\n"}
{"Time":"2018–06–29T12:11:40.355845048–04:00","Action":"output","Package":"github.com/jonbodner/reflect-cost","Output":"BenchmarkReflectInstantiate-8 \t"}
{"Time":"2018–06–29T12:11:42.667043237–04:00","Action":"output","Package":"github.com/jonbodner/reflect-cost","Output":"20000000\t 109 ns/op\n"}
Unfortunately, the JSON output is not very useful. First of all, while each line is valid JSON, there is no wrapping array or object around the lines; you have to construct one yourself. You might expect each benchmark to generate a JSON record with separate fields for the name of the benchmark, the number of iterations it took to get a stable answer, the time it took, and the units. Instead, the records have an "Output" field that requires you to merge the values of consecutive records to reconstruct the text output, which then needs to be split on tabs and spaces to find the desired values. Given these limitations, it's easier to forgo the JSON, direct the text output to a file, and parse that.
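As a rough illustration (not from the article; the function name is made up), parsing one line of the text output in Go might look like this:
package main

import (
        "fmt"
        "strings"
)

// parseBenchLine splits a benchmark result line such as
// "BenchmarkReflectGet-8   10000000   156 ns/op" into its parts.
func parseBenchLine(line string) (name, iterations, perOp string) {
        fields := strings.Fields(line)
        // fields: [name, iteration count, time, unit]
        return fields[0], fields[1], fields[2] + " " + fields[3]
}

func main() {
        name, n, t := parseBenchLine("BenchmarkReflectGet-8   10000000   156 ns/op")
        fmt.Println(name, n, t) // BenchmarkReflectGet-8 10000000 156 ns/op
}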
Go benchmarks are not limited to timing information. They integrate with Go's built-in code coverage and profiling support, giving you the option of displaying memory allocation information and allowing you to write both timing and memory information to profiling files that can be run through the pprof tool included with Go.
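For example (an illustrative invocation; -cpuprofile and -memprofile are standard go test flags), this writes CPU and memory profiles during the benchmark run, which you can then explore with go tool pprof:
go test -bench=. -benchmem -cpuprofile=cpu.out -memprofile=mem.out
go tool pprof cpu.out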
JMH is very configurable. You can choose the time units (ns, ms, etc.) and whether you want throughput (ops/time), average time (time/op), sampling time, or a single run time. You can have the output in text, CSV, SCSV, JSON, or LaTeX. You can get it to output some memory or threading profiling results. However, I don't know of any way to use this output with another tool. If you want more detailed information, you'll need to upgrade to something else.
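For instance (an illustrative invocation; -tu, -rf, and -rff are flags of the bundled JMH runner), this selects nanosecond units and writes the results to a JSON file:
java -jar target/benchmarks.jar -tu ns -rf json -rff results.json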
Programming Language Culture Matters
As someone who has spent decades writing Java, and several years writing Go, I find these kinds of comparisons fascinating. Lately, I've been enjoying writing Go more than writing Java. I think the culture of Go better reflects how I like to write software, and benchmarking is another area where Go's approach agrees with my thinking. Go takes the "batteries included" approach to its standard library and tooling; you get quite a lot included as part of the standard distribution, but that also means accepting the choices made by the team that maintains Go. By including simple benchmarking support as part of the standard library and tooling, and integrating it with a profiling toolkit that's bundled with the Go development tools, you get a "good enough" solution for the most common cases. But it's one that requires you to do some extra work (write your own benchmarking loops and blackhole function) and doesn't do things that the Go team considers unimportant (such as usable JSON output).
There's nothing wrong with Java's approach if you agree with Java's design philosophy and development culture. While benchmarking support isn't included in the JDK, Java does bundle some profiling tools, like jhat, jstat, and hprof. Unfortunately, they are either considered experimental or produce poor results. Other tools, like JVisualVM and Java Mission Control, have been open sourced, and their future development is uncertain. The net result is that Java relies on third parties to provide large parts of its developer tooling. This has encouraged a robust third-party ecosystem, but it makes it harder to get started if you don't know where to begin, and it is sometimes difficult to get tools to work together. Libraries in Java tend to have lots of configuration choices, as the Java ecosystem is focused on configurability. There's probably no better way to understand the different attitudes toward configurability in Java and Go than to look at their garbage collectors. There are over 50 different flags you can set to configure the behavior of the multiple garbage collectors included in the JVM. Go has only one garbage collector and only one configuration flag for it (the GOGC environment variable).
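To make the contrast concrete (an illustrative pair of command lines; the JVM flags shown are a small sample of real G1 tuning options, and app/app.jar are placeholder names):
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=45 -jar app.jar
GOGC=100 ./app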
In both cases, these choices are not intrinsic to the language; they are entirely artifacts of culture. This is what makes language wars a bit foolish. Which language you prefer is more a matter of which culture suits your programming style best, and less a matter of the actual functionality the language provides. It's also a matter of exposure; if you don't try other languages, you'll never know whether there's a better culture fit for you out there. Don't disparage other languages; give them a try and see how they work for you. You might be surprised where you end up.