Open Source Daily recommends one high-quality open source project on GitHub and one selected English-language tech or programming article every day. Keep reading Open Source Daily and build the good habit of learning something new each day.
Today's recommended open source project: ChromeAppHeroes, a hall of fame for Chrome extensions
Today's recommended English article: "How Do You Quality"

Today's recommended open source project: ChromeAppHeroes, a hall of fame for Chrome extensions. Link: GitHub
Why we recommend it: a hall of fame of good Chrome extensions. As the name suggests, the project collects extensions that work well in Chrome and also explains how to get extensions from the Chrome Web Store. If you use Chrome regularly, it's worth a look to see whether there is a handy tool or two to add; and if you use Firefox... perhaps now is a good time to start scouting good extensions for a future Firefox extension hall of fame.
Today's recommended English article: "How Do You Quality" by Joe Castro
Original link: https://medium.com/@joecastr/how-do-you-quality-e2518f865e4
Why we recommend it: the author's take on how to measure the quality of your own work.

How Do You Quality

A question I ask in interviews is “How do you verify the quality of your work?” It’s really not intended as a loaded question. While there are plenty of bad answers, I don’t think there’s a single correct one. When I’ve heard others ask the same thing, though, I could tell they were waiting to check boxes as they heard “unit tests”, “integration tests”, “code reviews”, etc. It is a question that lends itself to a buzzword bingo card.

I should write a separate post about how and why I conduct interviews the way I do, but overall I treat it as a conversation with give and take on both sides. What have you done in the past?… What did you find worked well?… What would you do differently now?… When I’ve done that before, here’s what I saw… Does that match your experience, or did you mitigate it somehow?… Ultimately, anyone who works professionally as a software developer is solving identified business problems, and the go-to tool in their belt is code. And once you separate it from any kind of product design, that code either solves the problem well or it doesn’t. The only metric that matters is “Will the business still be around tomorrow?” Crappy code quality can drag down that metric, but not always in obvious ways.

There’s a variation of the original question that I’ve been asked when I’m on the other side of the interview table: “How do you define good code?” I already alluded to my answer to that, but it’s worth listing the qualities that other professional, employed engineers may ascribe to it.

Good code has:
  • No bugs
  • Code reviews
  • Lots of comments
  • Lots of unit tests
  • High code coverage from tests
  • Verbose logging
  • Low cyclomatic complexity
  • Preference for interfaces over classes
  • A consistent style that follows the team’s guidelines
  • Source control
  • Dependency injection
  • Good performance
  • Good scalability
  • Ease of understanding
I’m not going to shoot those down. They’re generally fine things to have. Other than performance and bug count (and really, if performance is poor enough that it matters, then it is a bug), none of those directly ties back to solving a business objective. For the sake of setting up a steelman rather than a strawman, I’ll add:
  • The ability to adapt readily and with high confidence to changing business requirements
But I really don’t hear that one articulated often, either.

Why do developers give those answers to that question? Those are the kinds of things that are measurable and within one’s control. Is my code high quality? Of course! I have 10,000 unit tests! Any of those answers can also be a crutch, because while they’re potentially useful, none of them is sufficient on its own. And they’re not free. If those 10,000 tests never catch a bug before a customer sees it, then they weren’t actually providing any value. They cost development time to write and run; even if the run-time cost is paid by build machines, it still takes that much longer to get the code off the developer’s machine and on its way to production. Those tests are also friction for making changes, and they have to be maintained just like any customer-facing code.

Most of the things in that list directly affect a developer’s quality of life. Developers are going to be working in other people’s changes, debugging, and trying to shape the code to do new things while still doing the old things, and for the most part the nameless developer in this scenario wants to quickly figure out the right place to do something, get it done, move past it, and not have to revisit it later because some unintended consequence of their work got discovered in production. That’s good enough motivation, and the list above can be a fine one for those goals. If people are dealing with an inconsistent and unpredictable code base, it’s going to lead to frustration and ultimately attrition (one of the non-obvious ways bad code has an impact). Get enough attrition and it will directly affect the business metric.

So with all that as preamble, here are some of my controversial opinions about code quality that are either more nuanced than modern conventional wisdom, or run completely counter to it. They’re all formed based on experience rather than ideologies.

Tests have a very specific and limited usefulness. Generally it’s better to err on the side of fewer tests

Unit tests have costs: time to write, time to run, time to maintain. Functionally, unit tests really only do two things: they exercise code, and when coupled with code coverage they can show how to hit error cases in the code. They don’t verify correctness; they make sure the code behaves the way it was written. There will never be unit tests that catch something that wasn’t thought of, and code coverage from these tests is super misleading because of the potential explosion of combinatorial states. Add in mocked dependencies that keep track of how many times a function is called, and you’ve just turned your implementation details into part of your interface contract.
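To make that last point concrete, here is a minimal Python sketch, with entirely hypothetical names, of how a call-count assertion on a mock turns an implementation detail into part of the contract:

    from unittest.mock import Mock

    def apply_discount(order_total, pricing_service):
        # Implementation detail: today this happens to ask the pricing service once.
        rate = pricing_service.discount_rate()
        return order_total * (1 - rate)

    def test_apply_discount():
        pricing = Mock()
        pricing.discount_rate.return_value = 0.1

        # Observable behavior: a 10% discount is applied.
        assert apply_discount(100.0, pricing) == 90.0

        # This extra assertion makes "calls discount_rate exactly once" part of the
        # contract, so a refactor that caches or batches the lookup breaks the test
        # even though the returned value is unchanged.
        pricing.discount_rate.assert_called_once()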

Outside of catching egregious errors, unit tests are super valuable for two things:
  1. They can provide documentation for how an API is intended to be used.
  2. They’re a line of defense for good API design. If it is hard to write the test demonstrating how to use the API then the API can be improved.
Treat tests as documentation for a developer, not as a quality gate for preventing bugs. Tests that run through programmatic variations of all parameters serve exactly the opposite of this purpose.
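As a sketch of the first point, here is what a documentation-style test might look like; the RateLimiter below is a made-up stand-in, not code from any particular project:

    class RateLimiter:
        """Toy limiter: allows at most max_requests calls per client."""

        def __init__(self, max_requests):
            self.max_requests = max_requests
            self._counts = {}

        def allow(self, client_id):
            count = self._counts.get(client_id, 0)
            if count >= self.max_requests:
                return False
            self._counts[client_id] = count + 1
            return True

    def test_rate_limiter_intended_usage():
        # Reads like documentation: construct a limiter, make requests, see the cutoff.
        limiter = RateLimiter(max_requests=2)
        assert limiter.allow("client-a")
        assert limiter.allow("client-a")
        assert not limiter.allow("client-a")  # the third request is rejected

If that test were awkward to write, that would be the second point in action: a signal to reshape the API before shipping it.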

Avoid dependency injection as a pattern. Mocking is an anti-pattern, except in really specific scenarios

DI is a way of making things more testable without thinking about what you’re trying to test or why. See above. It leads away from the principles of loose coupling and tight cohesion, even while claiming to promote them. Everything that can be injected becomes a concrete part of the code’s contract: the code will be implemented in terms of those services. Libraries that make mocking objects easy reinforce that behavior. Rather than using parameters to drive configurable behavior, it becomes easy to write tests that basically treat implementation details as configuration objects. It becomes very difficult to reason about the true production state of those dependencies, how they’re being provided, and how they interact with each other.
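A rough sketch of the shape this tends to take, with hypothetical names: the first class injects internal collaborators “for testability”, the second drives the same knobs with plain parameters:

    class ReportGenerator:
        """Every collaborator is injected, so four internal details are now part of
        the public contract and show up in every test as pseudo-configuration."""

        def __init__(self, query_builder, row_formatter, retry_policy, cache):
            self._query_builder = query_builder
            self._row_formatter = row_formatter
            self._retry_policy = retry_policy
            self._cache = cache

    class SimplerReportGenerator:
        """Configurable behavior expressed as parameters; query building and
        formatting stay internal details."""

        def __init__(self, max_retries=3, cache_ttl_seconds=300):
            self._max_retries = max_retries
            self._cache_ttl_seconds = cache_ttl_seconds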

The three scenarios where it makes sense to mock are storage, network, and time services. Unless you’re working on an actual storage, network, or time library, those are resources that sit entirely outside the domain of most code’s business logic, and insofar as there are tests, mocking them gives more reliable and repeatable verification of both success and failure paths.
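A minimal sketch of the time case, with illustrative names: the clock is passed in purely so tests can make expiry deterministic, while the business logic stays oblivious to where time comes from:

    import time

    class SessionToken:
        def __init__(self, ttl_seconds, clock=time.monotonic):
            self._clock = clock
            self._expires_at = clock() + ttl_seconds

        def is_expired(self):
            return self._clock() >= self._expires_at

    def test_session_token_expiry():
        fake_now = [1000.0]
        token = SessionToken(ttl_seconds=30, clock=lambda: fake_now[0])

        assert not token.is_expired()
        fake_now[0] += 31  # advance the fake clock past the TTL
        assert token.is_expired()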

Behavioral metrics are superlative

It is super important to understand how things are running in production. Whereas tests can only tell you answers to questions someone thought to ask, metrics can yield answers that beg new questions. Build dashboards based on important behavioral metrics. Build themed dashboards for different aspects of different features. Operational run books should have links to these dashboards. Whenever someone writes a new feature there should be metrics in place, and a sense of what those metrics will look like if things are going right and also if they’re going wrong.

Focus on behavioral metrics. Latency and counters are easy things to record and match a developer’s mindset in the code. Avoid falling back on those as the core metrics you emit. That’s the data-science version of “the tests verify that the code runs the way I wrote it.” Emit metrics based on external observations, ideally the way the customer experiences the feature. Don’t record something just because it was already conveniently expressed as a number.
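A small sketch of the difference; emit_metric here is a hypothetical stand-in for whatever metrics client is actually in use:

    def emit_metric(name, value):
        # Placeholder transport; a real system would hand this to a metrics client.
        print(f"metric {name}={value}")

    def record_search_outcome(results):
        emit_metric("search.handler.invocations", 1)           # easy, developer-shaped counter
        emit_metric("search.results_returned", len(results))   # what the customer actually saw
        emit_metric("search.zero_result_queries", 0 if results else 1)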

Instrument competing indicators. Insofar as metrics can be used to measure the success of a feature, they can also be gamed. This usually won’t be malicious; it’s the nature of measuring anything. If you’re trying to reduce the latency of an API that returns a list of the top 20 items for X, a cheap way to reduce it is to change that 20 to 10. Is that the right or wrong thing to do? I don’t know. It shouldn’t be done arbitrarily, and there had better be something else being measured that indicates the holistic impact of any trade-off being made.
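A sketch of instrumenting that top-20 example with competing indicators, so trimming the list to win on latency shows up immediately in the result count; the names and metric scheme are assumptions for illustration:

    import time

    def top_items_handler(fetch_top_items, emit_metric, limit=20):
        started = time.monotonic()
        items = fetch_top_items(limit)
        # Recorded side by side: improving one at the expense of the other is visible.
        emit_metric("top_items.latency_ms", (time.monotonic() - started) * 1000.0)
        emit_metric("top_items.result_count", len(items))
        return items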

Tie alarms and deployments to metrics monitoring. If a code change causes metrics to move in an unexpected way, the deployment should be automatically stopped. A key part of this is that it is automated: don’t expect to watch every dashboard with thorough diligence for every deployment. Any application of reasonable scale should have so many metrics and so many alarms that it’s infeasible to watch them all manually.
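One possible shape for such a gate, sketched with made-up functions and thresholds rather than any specific deployment tool’s API:

    def should_halt_rollout(baseline, current, allowed_increase=0.10):
        """Halt if the monitored error rate rose more than 10% over its baseline."""
        if baseline == 0:
            return current > 0
        return (current - baseline) / baseline > allowed_increase

    def deployment_gate(read_error_rate, halt_rollout):
        baseline = read_error_rate(window="pre_deploy")
        current = read_error_rate(window="post_deploy")
        if should_halt_rollout(baseline, current):
            halt_rollout(reason=f"error rate {current:.4f} vs baseline {baseline:.4f}")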

This mostly applies to server software, but more and more everything is effectively moving this way. This is equally applicable to client-side telemetry.

Now, tying this back into the original thesis: Do these suggestions help improve the macro business metric?

Maybe. In my experience, I think so. They can certainly be abused by following the letter and not the spirit. I’d hate for someone to read this and use it as an argument for not writing tests and then not doing anything else to replace those tests they didn’t write. There are additional things I have opinions on about actual code level constructs and architecture, but since those aren’t really about keeping the business alive they’re not pertinent to this write up.

These opinions are my own and I reserve the right to get smarter, later. Let me know if any of this rings true or especially if I seem to have completely missed the mark on something.